The word choice was deliberate. When Pentagon chief technology officer Emil Michael told reporters last week that allowing Claude into the defence supply chain would "pollute" it, he wasn't reaching for a neutral term. Pollutants are contaminants — foreign, dangerous, something you clean out. For a company founded in San Francisco by former members of OpenAI, being cast not as an overly cautious vendor but as an active toxin marks a significant rhetorical shift in what was already an extraordinary dispute.
The background is well-documented. Anthropic refused two specific demands from the Pentagon: that Claude be deployable in fully autonomous weapons systems, and that it be available for large-scale domestic surveillance of Americans. In late February, Defense Secretary Pete Hegseth formally designated Anthropic a Supply Chain Risk to National Security, a label previously reserved for foreign adversaries and compromised vendors, not US companies. The official notification to Anthropic's leadership followed on 5 March.
On 9 March, Anthropic filed two simultaneous lawsuits in California and Washington DC, calling the designation "unprecedented and unlawful" and framing it as direct retaliation for its refusal to remove safety constraints. The company also published a statement on its website setting out its position, including its reasoning for referring to the department as "the Department of War", the name the current administration has restored to the Pentagon.
Now the fight is escalating on both the legal and the rhetorical front. On 12 March, Anthropic asked an appeals court for an emergency stay, a temporary halt to the designation's enforcement while the full case proceeds. The same day, Emil Michael's "pollutant" comments went public. Both sides are hardening: the Pentagon shows no sign of moderating its position, and Anthropic is burning through legal options to slow the designation's practical effects while the courts deliberate.
One detail keeps surfacing in reporting, and it's worth dwelling on: even as the designation took effect, Claude was reportedly still being used in US military operations related to Iran. The Pentagon, in other words, apparently couldn't immediately unwire itself from a vendor it had just blacklisted, which underscores how abruptly the designation was applied and how embedded Anthropic's technology had become across defence workflows.
The financial stakes for Anthropic are significant. Its own court filings estimate the designation could cost it hundreds of millions, or even multiple billions, of dollars in 2026 — not just from losing Pentagon contracts directly, but from the cascade effect on defence contractors who must now certify they don't use Claude anywhere in their work with the military.
The core legal question is one courts have rarely had to address: can a US company be designated a national security risk specifically because it refused to remove ethical constraints from its product? If the answer turns out to be yes, the practical message to every AI lab watching is clear: safety guardrails are a commercial liability when the government is the customer. That's the signal the current administration has already sent, regardless of how the lawsuits are ultimately resolved.