
According to a new court filing, the Pentagon told Anthropic that the two sides were largely aligned, a week after Trump declared the relationship was over.

Anthropic filed two sworn declarations in a California federal court late Friday afternoon, disputing the Defense Department’s claim that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case rests on technical misconceptions and on objections that were never actually raised during the months of negotiations that preceded the dispute.

The declarations were filed alongside Anthropic’s response brief in its lawsuit against the Department of Defense, ahead of a hearing Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute dates back to late February, when President Trump and Secretary of Defense Pete Hegseth publicly declared they would cut ties with Anthropic after the company rejected unrestricted military use of its AI technology.

The two people who submitted the declarations were Sarah Heck, Anthropic’s head of policy, and Thiyagu Ramasamy, the company’s head of public sector.

Heck is a former National Security Council official who worked in the White House during the Obama administration before moving to Stripe and then to Anthropic, where she runs the company’s government relations and policy functions. She personally attended the February 24 meeting where CEO Dario Amodei sat down with Secretary of Defense Hegseth and Deputy Secretary of Defense Emil Michael.

In her declaration, Heck identifies what she describes as a central falsehood in the government’s filings: the claim that Anthropic sought some kind of authorization role over military operations. She said the claim is completely untrue. “During Anthropic’s negotiations with the department, neither I nor any other Anthropic employee ever indicated that the company wanted such a role,” she wrote.

She also states that the Pentagon’s concern that Anthropic could disable or alter its technology during operations was never raised during negotiations. Instead, she said, the issue first appeared in the government’s court filings, leaving Anthropic without a chance to respond.

Another detail of Heck’s declaration that caught attention was an email she sent to Amodei on March 4, the day after the Department of Defense formally confirmed its supply chain risk designation for Anthropic. In it, she wrote that the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its stance on autonomous weapons and on mass surveillance of Americans.

The emails Heck attached as evidence to the declaration are worth reading along with what Michael has said publicly since. On March 5, Amodei issued a statement saying the company had been in “productive dialogue” with the Department of Defense. The next day, Michael posted on X that “there are no ongoing War Department negotiations with Anthropic.” A week later, he told CNBC that there was “no chance” of a reunion.

Heck’s point is this: If Anthropic’s position on these two issues constitutes a national security threat, why did a Pentagon official say, immediately after the designation was confirmed, that the two sides were nearly aligned on those exact issues? (She did not say the government used the designation as a bargaining chip, but questions remain based on the timeline she presented.)

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he is credited with building the team that brings Claude models into national security and defense environments, work that includes a $200 million contract with the Department of Defense announced last summer.

His declaration takes on the government’s argument that Anthropic could theoretically disrupt military operations by disabling its technology or changing how it operates. Ramasamy says this is technically impossible. According to him, once Claude is deployed inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it: there are no remote kill switches, no backdoors, and no mechanisms to push unauthorized updates. Any kind of “operational veto,” he explains, is a fiction, because changes to the model require explicit approval and action from the Department of Defense.

Anthropic can’t even see what government users enter into the system, let alone extract data, he says.

Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He noted that Anthropic employees have been screened for U.S. government security clearances, the same background check process required to access classified information, adding that “to the best of my knowledge,” Anthropic is the only AI company that has built AI models its cleared employees actually run in classified environments.

Anthropic’s lawsuit alleges that the supply chain risk designation, the first of its kind applied to a U.S. company, violates the First Amendment and constitutes government retaliation for the company’s publicly expressed views on AI safety.

The government rejected that framing outright in a 40-page filing earlier this week, arguing that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a national security measure, not a punishment for the company’s views.
