President Donald Trump said on Friday he will order federal agencies to stop using Anthropic technology. The decision follows the Anthropic Pentagon AI dispute over military access and safety limits. Trump spoke about an hour before a Pentagon deadline tied to Anthropic's defense contract, and nearly a day after Anthropic CEO Dario Amodei said his firm could not accept the Pentagon's terms.
The order calls for most agencies to halt their use of Anthropic tools immediately. Trump granted the Pentagon six months to remove Anthropic tools already embedded in its platforms, and he warned of "major civil and criminal consequences" tied to the phaseout period. The company did not immediately comment on his remarks.
Contract Friction Centers On Surveillance And Weapons
The dispute began with a clash over how the military may use advanced chatbots. Anthropic builds Claude, which the government uses in some settings. Defense Secretary Pete Hegseth gave Amodei a Friday deadline: grant permission for unrestricted military use or face contract consequences.
Pentagon officials raised several options during the talks. They included canceling the contract, labeling the firm a supply chain risk, or invoking the Defense Production Act. Anthropic said it wanted narrow assurances on two topics. It sought limits against mass surveillance of Americans and fully autonomous weapons.
The Pentagon’s top spokesman, Sean Parnell, said the military has no interest in mass surveillance. He also said the Pentagon does not want weapons without human involvement. Parnell added that the Pentagon wants to use Anthropic’s model for “all lawful purposes.” Officials have not detailed intended uses beyond that phrasing.
Competitive Fallout Spreads Across The AI Supply Chain
The move is likely to benefit Elon Musk’s chatbot, Grok, according to the AP. The Pentagon plans to give Grok access to classified military networks.
The Pentagon awarded defense AI contracts to four companies last summer. They were Anthropic, Google, OpenAI, and xAI. Each contract is worth up to $200 million. Anthropic was the first approved for classified military networks, working with partners such as Palantir. A Pentagon official said other vendors were close to similar approval.
The controversy has also widened a public debate inside Silicon Valley. Some employees at rivals backed Amodei’s stance in an open letter. That letter said the Pentagon was also pressing other labs for broader terms. It argued that the government could try to play firms against each other.
Musk supported Trump's position and criticized Anthropic on X. The dispute has drawn in other defense tech figures, including Palmer Luckey of Anduril.
Policy Risk Rises As Lawmakers Question The Approach
Some lawmakers raised concerns about the Pentagon's handling of the standoff. Sen. Mark Warner of Virginia criticized the move and its rhetoric, questioning whether politics drove national security decisions. Former Pentagon AI leader Gen. Jack Shanahan also publicly pushed back, saying that targeting Anthropic may deliver headlines but harms everyone.
Shanahan said Claude is already widely used across government, including in classified settings. He called Anthropic’s red lines “reasonable,” the AP reported. He also argued that current large language models are not ready for fully autonomous weapons. That view adds pressure for clearer procurement standards.
In a notable twist, Sam Altman backed Anthropic during a CNBC interview on Friday, AP said. He questioned the Pentagon’s “threatening” posture and said many labs share similar limits. For investors, the episode highlights contract risk tied to safety policies and government leverage. It also signals a tougher procurement climate for AI vendors.