DOD says Anthropic’s ‘red lines’ make it an ‘unacceptable risk to national security’


The U.S. Department of Defense said on Tuesday night that Anthropic poses an "unacceptable risk to national security," marking the department's first rebuttal to the AI lab's lawsuits challenging Defense Secretary Pete Hegseth's decision last month to label the company a supply chain risk. As part of its complaints, Anthropic had asked the court to temporarily block the DOD from enforcing its label.

The crux of the DOD's argument, made in a 40-page filing in a California federal court, is the concern that Anthropic could "attempt to disable its technology or preemptively alter the behavior of its model" before or during "warfighting operations" if the company "feels that its corporate 'red lines' are being crossed."

Anthropic last summer signed a $200 million contract with the Pentagon to deploy its technology within classified systems. In later negotiations over the terms of the contract, Anthropic said it did not want its AI systems used for mass surveillance of Americans, and that the technology wasn't ready for use in targeting or firing decisions involving lethal weapons. The Pentagon contended that a private company shouldn't dictate how the military uses technology.

Chris Mattei, a lawyer specializing in First Amendment issues and a former Justice Department attorney, told Trendster there was no investigation to support the DOD's concerns about Anthropic potentially disabling or altering its AI models during warfighting operations. Without that evidence, the department's argument fails to adequately explain how Anthropic's negotiating position rendered it an "adversary," Mattei argued.

"The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they've taken against Anthropic," Mattei said. He added that the department failed to "articulate a credible or even comprehensible rationale for why Anthropic's refusal to agree to an 'all lawful use' provision rendered it a supply chain risk versus a vendor that DOD simply didn't want to do business with."

Many organizations have spoken out against the DOD's treatment of Anthropic, arguing that the department could have simply ended its contract. Several tech companies and employees, including from OpenAI, Google, and Microsoft, as well as legal rights groups, have filed amicus briefs in support of Anthropic.

In its lawsuits, Anthropic accused the DOD of infringing on its First Amendment rights and punishing the company on ideological grounds.


"In some ways, the government's nonsensical arguments are themselves the best evidence that the administration's conduct was plainly a retaliatory punishment for Anthropic's refusal to agree to the government's terms, which, contrary to the government's brief, is a form of protected expression," Mattei told Trendster.

A hearing on Anthropic's request for a preliminary injunction is set for next Tuesday.

Anthropic did not immediately respond to a request for comment.

This article has been updated to include information from Chris Mattei, a constitutional rights lawyer.
