Trump blocks Anthropic from federal contracts over AI safety rules
The administration designated the AI company a supply-chain risk after it refused to remove restrictions on autonomous weapons and mass surveillance.
President Trump ordered all federal agencies to immediately stop using Anthropic's AI technology Thursday, escalating a dispute that began when the company refused to lift its safety restrictions for Pentagon use. Defense Secretary Pete Hegseth formally designated Anthropic a supply-chain risk, blacklisting the firm from military contracts and forcing agencies to find alternatives within six months.
The conflict centers on Anthropic's terms of service, which explicitly prohibit using its AI models for autonomous weapons systems and mass surveillance. According to The Washington Post, negotiations broke down when Anthropic refused to waive these restrictions for classified military networks. Trump called the company "radical left" in his directive and threatened civil and criminal consequences if Anthropic does not cooperate with the phase-out.
Hours after the ban, OpenAI announced a Pentagon deal to supply AI to classified military networks. CEO Sam Altman said the agreement preserves similar bans on autonomous weapons and mass surveillance, asserting the Pentagon agreed to these terms. The timing suggests the administration may be using competitive pressure rather than abandoning safety restrictions entirely.
The General Services Administration moved quickly to implement Trump's order, removing Anthropic from its USAi.gov platform and Multiple Award Schedule procurement channels. Administrator Edward C. Forst stated the agency rejects attempts to politicize national security work, aligning with the administration's characterization of AI safety measures as political rather than technical.
This marks the first time an AI company has been designated a national security threat specifically for maintaining ethical guardrails. The Pentagon's supply-chain risk designation typically applies to foreign adversaries or companies with compromised hardware. American firms refusing to modify their terms of service have never received this designation before.
The ban extends beyond direct federal contracts. According to CBS News, the designation blocks any contractors working with the government from using Anthropic's technology, affecting companies throughout the defense industrial base. Companies with existing Anthropic integrations now face a choice between their AI infrastructure and their government contracts.
Anthropic has not publicly responded to the ban or the threatened consequences. The company's constitutional AI approach, which builds safety considerations directly into model training, would make removing specific restrictions technically difficult even if the company wanted to comply.
Sam Altman's public backing of Anthropic's safety stance, despite OpenAI benefiting from the competitor's blacklisting, suggests the AI industry recognizes the precedent being set. If safety restrictions become disqualifying factors for government contracts, it will change how American AI companies approach development.
The six-month transition timeline creates immediate pressure on agencies currently using Claude, Anthropic's flagship model. The State Department and Department of Energy had been testing the technology for document analysis and research applications. These uses fall well within Anthropic's acceptable use policies but are now prohibited under the executive order.
Federal contractors must audit their AI stacks for any Anthropic dependencies by September. Agencies using Claude for non-military applications still face mandatory migration. The GSA's removal of Anthropic from procurement channels affects state and local governments that use federal contracts. OpenAI's deal terms could establish a new baseline for Pentagon AI partnerships. The supply-chain risk designation may affect Anthropic's ability to work with allied governments.
The administration hasn't specified what cooperation would satisfy the phase-out requirements or lift the threatened consequences. Meanwhile, the Pentagon is beginning to integrate OpenAI's technology into classified systems, a test of whether Altman's stated safety guarantees hold under military deployment.