A discrepancy between first- and third-party benchmark results for OpenAI's o3 AI model is raising questions about the company's transparency and model testing practices.
When OpenAI unveiled o3 in December, the company claimed the model could answer just over a fourth of the questions on FrontierMath, a challenging set of math problems. That score blew the competition away: the next-best model managed to answer only around 2% of FrontierMath problems correctly.
"Today, all offerings out there have less than 2% [on FrontierMath]," Mark Chen, chief research officer at OpenAI, said during a livestream. "We're seeing [internally], with o3 in aggressive test-time compute settings, we're able to get over 25%."
As it turns out, that figure was likely an upper bound, achieved by a version of o3 with more computing power behind it than the model OpenAI publicly launched last week.
Epoch AI, the research institute behind FrontierMath, released the results of its independent benchmark tests of o3 on Friday. Epoch found that o3 scored around 10%, well below OpenAI's highest claimed score.
OpenAI has released o3, their highly anticipated reasoning model, along with o4-mini, a smaller and cheaper model that succeeds o3-mini.
We evaluated the new models on our suite of math and science benchmarks. Results in thread! pic.twitter.com/5gbtzkEy1B
- Epoch AI (@EpochAIResearch) April 18, 2025
That doesn't mean OpenAI lied, per se. The benchmark results the company published in December show a lower-bound score that matches the score Epoch observed. Epoch also noted that its testing setup likely differs from OpenAI's, and that it used an updated release of FrontierMath for its evaluations.
"The difference between our results and OpenAI's might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time [computing], or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 vs the 290 problems in frontiermath-2025-02-28-private)," wrote Epoch.
According to a post on X from the ARC Prize Foundation, an organization that tested a pre-release version of o3, the public o3 model "is a different model [...] tuned for chat/product use," corroborating Epoch's report.
"All released o3 compute tiers are smaller than the version we [benchmarked]," wrote ARC Prize. Generally speaking, larger compute tiers can be expected to achieve better benchmark scores.
OpenAI's own Wenda Zhou, a member of the technical staff, said during a livestream last week that the o3 in production is "more optimized for real-world use cases" and speed than the version of o3 demoed in December. As a result, it may exhibit benchmark "disparities," he added.
"[W]e've done [optimizations] to make the [model] more cost efficient [and] more useful," Zhou said. "We still hope that - we still think that - this is a much better model."
Granted, the fact that the public release of o3 falls short of OpenAI's testing promises is a bit of a moot point, since the company's o3-mini-high and o4-mini models outperform o3 on FrontierMath, and OpenAI plans to debut a more powerful o3 variant, o3-pro, in the coming weeks.
It is, however, another reminder that AI benchmarks are best not taken at face value, particularly when the source is a company with services to sell.
Benchmarking "controversies" are becoming a common occurrence in the AI industry as vendors race to capture headlines and mindshare with new models.
In January, Epoch was criticized for waiting to disclose funding from OpenAI until after the company announced o3. Many academics who contributed to FrontierMath weren't informed of OpenAI's involvement until it was made public.
More recently, Elon Musk's xAI was accused of publishing misleading benchmark charts for its latest AI model, Grok 3. Just this month, Meta admitted to touting benchmark scores for a version of a model that differed from the one the company made available to developers.
Updated 4:21 p.m. Pacific: Added comments from Wenda Zhou, a member of the OpenAI technical staff.