Anthropic submitted two sworn declarations to a California federal court docket late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case rests on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.
The declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people who submitted the declarations are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, the company's Head of Public Sector.
Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relations and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon's Under Secretary Emil Michael.
In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn't true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote. She also points out that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government's court filings, which gave Anthropic no opportunity to respond.
Another detail in Heck's declaration sure to draw attention: on March 4, the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic, Under Secretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
The email, which Heck attaches as an exhibit to her declaration, also raises questions about Michael's later public statements. The next day, Amodei published a statement saying the company had been having "productive conversations" with the Pentagon. The day after that, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks. Heck's point appears to be: if Anthropic's stance on these two issues is what makes it a national security threat, why was the Pentagon's own official saying the two sides were nearly aligned on exactly those issues just days after the designation was finalized?
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he's credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.
His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn't technically possible. By his telling, once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and action to install. Anthropic, he says, can't even see what government users are typing into the system, let alone extract that data.
Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting, the same background check process required for access to classified information, adding in his declaration that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.
Anthropic's lawsuit argues that the supply-chain risk designation, the first ever applied to an American company, amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company's views.
The Pentagon case is not the only legal matter on Anthropic's docket this Tuesday. Separately on Friday, a federal judge tentatively ruled that Reddit's lawsuit against the company, which accuses Anthropic of scraping its content without permission to train its AI, should be sent back to state court, where Reddit originally filed it last June. A hearing to finalize that decision is also scheduled for Tuesday.