
Regulation

Anthropic Refuses to Build Autonomous Weapons as Pentagon Deadline Looms

February 27, 2026|By Megaton Editorial

The AI company refuses military demands to remove safety guardrails, risking a $200 million contract and potential blacklisting.


At 5:01 PM ET, Anthropic will officially become what the Pentagon calls a supply chain risk. Defense Secretary Pete Hegseth threatened the designation if the AI company doesn't remove restrictions on lethal autonomous weapons and mass surveillance. The label would effectively blacklist Anthropic from defense contracts and potentially trigger broader federal procurement bans.

CEO Dario Amodei rejected what the Pentagon termed its "final offer" yesterday, telling The Guardian the company "cannot in good conscience" enable fully autonomous weaponry or domestic surveillance capabilities. The standoff marks the first major test of whether AI companies can maintain ethical boundaries when facing severe pressure from their largest potential customer: the U.S. government.

The dispute centers on two specific prohibitions in Anthropic's usage policies. According to TechPolicy.Press, the Pentagon demands removal of restrictions against lethal autonomous weapons systems and mass surveillance of U.S. citizens. Defense officials argue they need flexibility for all lawful use of AI systems. Anthropic maintains that current AI reliability is insufficient for autonomous lethal force.

"The Pentagon cannot let a private company dictate military operational terms," a Defense Department spokesperson told the Houston Chronicle. The agency has threatened to invoke the Defense Production Act, wartime powers that could theoretically force Anthropic's compliance.

Anthropic appears unmoved. The company frames its refusal as protecting democratic principles, even as it faces pressure on multiple fronts. Just yesterday, according to CBC News, Anthropic quietly updated its Responsible Scaling Policy, removing a pledge to pause AI training if safety couldn't be guaranteed. Critics see this as evidence the company is already bending to competitive pressures, making its Pentagon stance more significant.

The timing matters. OpenAI recently removed its prohibition on military use. Google and xAI have signaled willingness to work with defense agencies under broader terms, The Register reports. Palantir has long pursued military contracts without such restrictions.


U.S. senators intervened yesterday, demanding the Pentagon cancel its ultimatum. According to India Today, lawmakers called the DoD's attempt to strong-arm a company into enabling mass surveillance unconstitutional and in need of legislative oversight. "This is a fundamental issue that Congress must address," the senators wrote, though they stopped short of proposing specific legislation.

The technical stakes are concrete. Anthropic's Claude models power analysis systems across multiple defense agencies. Losing access would force the Pentagon to migrate to competitors' systems, a process that could take months and cost millions beyond the disputed $200 million contract.

Legal experts see broader consequences. "This sets a precedent for private sector control over military technology," writes Opinio Juris, noting that international law on autonomous weapons remains unsettled. The standoff makes Anthropic the de facto arbiter of what constitutes acceptable military AI use, at least for its own systems.

Reason Magazine reports the Pentagon refuses to be contractually bound by surveillance limitations even while claiming it has no interest in mass surveillance. This gap between stated intent and demanded flexibility forms the core of the dispute.

AI companies face increasing pressure to choose between ethical boundaries and government contracts worth hundreds of millions. The Defense Production Act could theoretically force compliance, though invoking it against an AI company would be without precedent. Other major AI providers appear ready to fill any void left by Anthropic's potential exit from defense work. Congressional intervention suggests legislative action on military AI use may finally materialize. The dispute reveals unresolved questions about who decides acceptable AI use cases when capabilities outpace regulation.

If Anthropic holds firm past today's deadline, the Pentagon must decide whether to follow through on its threats. Blacklisting a major AI provider would signal that ethical objections to military applications carry real consequences. It would also demonstrate that the Defense Department prioritizes operational flexibility over safety guardrails, even when those guardrails target capabilities the Pentagon claims it doesn't want.

AI will transform warfare. The question is whether the companies building these systems get any say in how.