OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models on the department's classified network.

This follows a high-profile standoff between the DoD (also known under the Trump administration as the Department of War) and OpenAI's rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for "all lawful purposes," while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.

In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company "never raised objections to specific military operations nor tried to limit use of our technology in an ad hoc manner," but he argued that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic's position.

After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the "Leftwing nut jobs at Anthropic" in a social media post that also directed federal agencies to stop using the company's products after a six-month phase-out period.

In a separate post, Secretary of Defense Pete Hegseth claimed Anthropic was trying to "seize veto power over the operational decisions of the United States military." Hegseth also said he is designating Anthropic as a supply-chain risk: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

On Friday, Anthropic said it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations," but insisted it would "challenge any supply chain risk designation in court."
Notably, Altman claimed in a post on X that OpenAI's new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Altman said OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," and that it will deploy engineers with the Pentagon "to help with our models and to ensure their safety."

"We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept," Altman added. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements."

Fortune's Sharon Goldman reports that Altman told OpenAI employees at an all-hands meeting that the government will allow the company to build its own "safety stack" to prevent misuse, and that "if the model refuses to do a task, then the government wouldn't force OpenAI to make it do that task."

Altman's post came shortly before news broke that the U.S. and Israeli governments have begun bombing Iran, with Trump calling for the overthrow of the Iranian government.





