Anthropic and the Pentagon are reportedly in contention over the use of Claude.

According to a new report from Axios, the Pentagon is pressuring AI companies to allow the U.S. military to use their technology for “any lawful purpose,” but Anthropic is refusing to do so.

The government is reportedly making the same request of OpenAI, Google, and xAI. An anonymous Trump administration official told Axios that one of those companies has agreed and the other two appear to have shown some flexibility.

Anthropic, meanwhile, is said to be the most resistant. In response, the U.S. Department of Defense appears to be threatening to cancel a $200 million contract with the AI company.

In January, the Wall Street Journal reported significant disagreement between Anthropic and Pentagon officials over how the Claude models could be used. The WSJ later revealed that Claude was used in the U.S. military operation to capture Venezuelan President Nicolás Maduro.

Anthropic did not immediately respond to TechCrunch’s request for comment.

A company spokesperson told Axios that the company “has not had discussions with the War Department about using Claude for specific operations,” but is instead “focused on specific usage policy questions, such as tight limits on fully autonomous weapons and large-scale domestic surveillance.”