OpenAI announced on Tuesday the launch of two open-weight AI reasoning models with capabilities similar to its o-series. Both are freely available to download from the online developer platform Hugging Face, the company said, describing the models as "state-of-the-art" when measured across several benchmarks for evaluating open models.
The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory.
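The 16GB figure is plausible with back-of-the-envelope math, assuming the weights are stored in a 4-bit format (OpenAI distributes the models with 4-bit MXFP4-quantized MoE weights). The ~21 billion total-parameter count for gpt-oss-20b is taken from OpenAI's release materials, not from this article, and runtime overhead (activations, KV cache) is ignored here:

```python
# Rough weight-storage estimate for the two gpt-oss models.
# Assumption: 4-bit weights, i.e. 0.5 bytes per parameter; activation
# memory and KV cache are ignored, so real usage is somewhat higher.

def weight_memory_gb(total_params_billions: float, bytes_per_param: float = 0.5) -> float:
    """Approximate weight storage in gigabytes."""
    return total_params_billions * 1e9 * bytes_per_param / 1e9

print(f"gpt-oss-120b: ~{weight_memory_gb(117):.1f} GB")  # fits on one 80GB data-center GPU
print(f"gpt-oss-20b:  ~{weight_memory_gb(21):.1f} GB")   # fits under a 16GB laptop's memory
```

At 0.5 bytes per parameter, 117B parameters come to roughly 58.5GB of weights and 21B to roughly 10.5GB, which is consistent with the single-GPU and 16GB-laptop claims above.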
The launch marks OpenAI's first "open" language model since GPT-2, which was released more than five years ago.
In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as Trendster previously reported. That means if OpenAI's open model isn't capable of a certain task, such as processing an image, developers can connect the open model to one of the company's more capable closed models.
While OpenAI open-sourced AI models in its early days, the company has generally favored a proprietary, closed-source development approach. The latter strategy has helped OpenAI build a large business selling access to its AI models via an API to enterprises and developers.
However, CEO Sam Altman said in January that he believes OpenAI has been "on the wrong side of history" when it comes to open sourcing its technologies. The company today faces growing pressure from Chinese AI labs, including DeepSeek, Alibaba's Qwen, and Moonshot AI, which have developed several of the world's most capable and popular open models. (While Meta previously dominated the open AI space, the company's Llama AI models have fallen behind in the last year.)
In July, the Trump administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.
With the release of gpt-oss, OpenAI hopes to curry favor with developers and the Trump administration alike, both of which have watched Chinese AI labs rise to prominence in the open source space.
"Going back to when we started in 2015, OpenAI's mission is to ensure AGI that benefits all of humanity," said OpenAI CEO Sam Altman in a statement shared with Trendster. "To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for broad benefit."
How the models performed
OpenAI aimed to make its open model a leader among other open-weight AI models, and the company claims to have done just that.
On Codeforces (with tools), a competitive coding test, gpt-oss-120b and gpt-oss-20b score 2622 and 2516, respectively, outperforming DeepSeek's R1 while underperforming o3 and o4-mini.
On Humanity's Last Exam, a challenging test of crowd-sourced questions across a variety of subjects (with tools), gpt-oss-120b and gpt-oss-20b score 19% and 17.3%, respectively. Here too, they underperform o3 but outperform leading open models from DeepSeek and Qwen.
Notably, OpenAI's open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini.
Hallucinations have been getting more severe in OpenAI's latest AI reasoning models, and the company previously said it doesn't quite understand why. In a white paper, OpenAI says this is "expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more."
OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated in response to 49% and 53% of questions, respectively, on PersonQA, the company's in-house benchmark for measuring the accuracy of a model's knowledge about people. That's more than triple the hallucination rate of OpenAI's o1 model, which scored 16%, and higher than its o4-mini model, which scored 36%.
Training the new models
OpenAI says its open models were trained with processes similar to those used for its proprietary models. The company says each open model leverages a mixture-of-experts (MoE) architecture to tap fewer parameters for any given question, making it run more efficiently. For gpt-oss-120b, which has 117 billion total parameters, OpenAI says the model only activates 5.1 billion parameters per token.
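The active-parameter savings come from MoE routing: a learned router scores a pool of expert sub-networks for each token, and only the top-k experts actually run, so most expert parameters stay idle on any given token. A minimal sketch of top-k routing follows; the expert count, k value, and softmax router here are illustrative placeholders, not gpt-oss's actual configuration:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(router_scores, k=2):
    """Pick the top-k experts for one token and renormalize their weights."""
    top = sorted(range(len(router_scores)),
                 key=lambda i: router_scores[i], reverse=True)[:k]
    weights = softmax([router_scores[i] for i in top])
    return list(zip(top, weights))

# 32 experts but only k=2 run per token, so roughly 2/32 of the expert
# parameters are active. This is how a 117B-parameter total can shrink
# to a few billion active parameters per token.
random.seed(0)
scores = [random.gauss(0, 1) for _ in range(32)]
active = route_token(scores, k=2)
print(active)  # two (expert_index, weight) pairs whose weights sum to 1
```

Each token's output is then a weighted sum of just the chosen experts' outputs, which is why total parameter count and per-token compute can diverge so sharply.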
The company also says its open models were trained using high-compute reinforcement learning (RL), a post-training process that teaches AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. This was also used to train OpenAI's o-series of models, and the open models have a similar chain-of-thought process in which they take additional time and computational resources to work through their answers.
As a result of the post-training process, OpenAI says its open AI models excel at powering AI agents and are capable of calling tools such as web search or Python code execution as part of their chain-of-thought process. However, OpenAI says its open models are text-only, meaning they will not be able to process or generate images and audio like the company's other models.
OpenAI is releasing gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license, which is generally considered one of the most permissive. This license will allow enterprises to monetize OpenAI's open models without having to pay or obtain permission from the company.
However, unlike fully open source offerings from AI labs like AI2, OpenAI says it will not be releasing the training data used to create its open models. This decision is not surprising given that several active lawsuits against AI model providers, including OpenAI, have alleged that these companies inappropriately trained their AI models on copyrighted works.
OpenAI delayed the release of its open models several times in recent months, in part to address safety concerns. Beyond the company's typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its gpt-oss models to be more useful in cyberattacks or in the creation of biological or chemical weapons.
After testing by OpenAI and third-party evaluators, the company says gpt-oss may marginally increase biological capabilities. However, it did not find evidence that these open models could reach its "high capability" threshold for danger in those domains, even after fine-tuning.
While OpenAI's model appears to be state-of-the-art among open models, developers are eagerly awaiting the release of DeepSeek R2, that lab's next AI reasoning model, as well as a new open model from Meta's new superintelligence lab.





