A new AI coding challenge has revealed its first winner, and set a new bar for AI-powered software engineers.
On Wednesday at 5pm PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive $50,000 for the prize. But more surprising than the win was his final score: he won with correct answers to just 7.5% of the questions on the test.
“We’re glad we built a benchmark that’s actually hard,” said Konwinski. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their best models. But that’s kind of the point. K Prize runs offline with limited compute, so it favors smaller and open models. I love that. It levels the playing field.”
Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.
Like the well-known SWE-Bench system, the K Prize tests models against flagged issues from GitHub as a measure of how well models can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, models were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date.
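The contamination guard is simple in principle: freeze submissions first, then harvest only issues opened after that deadline, so no entrant could have trained on them. Below is a minimal Python sketch of that filtering step, using GitHub's public search API; the repository name, helper function, and cutoff handling are illustrative assumptions, not the K Prize's actual pipeline.

```python
# Minimal sketch of a timed-entry test harvest (assumptions, not the K Prize pipeline):
# collect only GitHub issues created strictly after the submission cutoff.
from datetime import datetime, timezone

import requests  # third-party: pip install requests

CUTOFF = datetime(2025, 3, 12, tzinfo=timezone.utc)  # round-one submission deadline


def fetch_fresh_issues(repo: str, token: str | None = None) -> list[dict]:
    """Return issues in `repo` opened after the cutoff date."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    query = f"repo:{repo} is:issue created:>{CUTOFF:%Y-%m-%d}"
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": query, "per_page": 100},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]


if __name__ == "__main__":
    # "psf/requests" is a placeholder repo for illustration only.
    issues = fetch_fresh_issues("psf/requests")
    print(f"{len(issues)} candidate issues opened after {CUTOFF:%Y-%m-%d}")
```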
The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier “Verified” test and 34% on its harder “Full” test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or just the challenge of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon.
“As we get more runs of the thing, we’ll have a better sense,” he told Trendster, “because we expect people to adapt to the dynamics of competing on this every few months.”
It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available. But with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.
“I’m quite bullish about building new tests for existing benchmarks,” says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.”
For Konwinski, it’s not just a better benchmark, but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”