A federal judge on Thursday temporarily blocked the Trump administration from labeling Anthropic a "supply chain risk" and cutting off the artificial intelligence company's access to federal contracts.
US District Judge Rita Lin granted Anthropic's request for a preliminary injunction, finding that the Trump administration's "broad punitive measures" against the company "were likely unlawful" and could "cripple Anthropic."
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," Lin wrote in her ruling.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The dispute centers on the Pentagon's demand to use Anthropic's Claude AI for "all lawful purposes," while Anthropic wanted to bar the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to meet the government's demands, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a "supply chain risk," prohibiting the use of its products in defense contract work.
Anthropic responded with a lawsuit filed earlier this month in federal court challenging the designation, calling it an "unprecedented and unlawful" attack on the company's right to free speech.
Lin wrote that the administration's measures do not appear to reflect the government's national security interests but rather seem punitive in nature.
"If the concern is the integrity of the operational chain of command, the Department of War could simply stop using Claude. Instead, these measures appear designed to punish Anthropic," Lin wrote.
Lin also delayed her order for one week to allow the Pentagon to seek a stay of the ruling.
Anthropic said in a statement that it was "grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
The White House and the Pentagon did not immediately respond to requests for comment.