AI Daily
Security • March 22, 2026

The Pentagon Said It Was Nearly Aligned With Anthropic. Then Trump Declared the Deal Dead.

By AI Daily Editorial • March 22, 2026

A court filing unsealed this week contains a detail that cuts through the official narrative of the Anthropic-Pentagon dispute: the Department of Defense told Anthropic, in writing, that the two sides were "nearly aligned" on contract terms — and it did so approximately one week before President Trump publicly declared the relationship finished and directed federal agencies to stop using Anthropic's tools. The contradiction is not small. Either the DoD's own negotiators did not know what the president was about to announce, or the "nearly aligned" message was a negotiating tactic, or the decision to end the relationship was made at a political level that bypassed the people actually doing the work of reaching an agreement.

The court filing emerged from Anthropic's ongoing legal challenge to its supply-chain risk designation — the label the Pentagon attached to the company after talks broke down, which effectively bars it from federal contracts. Anthropic has argued in court that the designation was politically motivated and procedurally irregular. The "nearly aligned" document strengthens that argument considerably. It is difficult to square a good-faith finding that two parties are nearly aligned on terms with a simultaneous or near-simultaneous determination that one of those parties poses an unacceptable risk to national security.

The DoD's public position has been harder-edged. A filing from last week characterised Anthropic's "red lines", its contractual refusals to allow use of Claude for autonomous lethal weapons and mass domestic surveillance, as making the company "an unacceptable risk to national security." That framing positions Anthropic's safety commitments not as a negotiating constraint to be worked around but as a fundamental disqualifier. The Pentagon is meanwhile building alternative AI capabilities, according to reporting from TechCrunch, which suggests its practical intent is replacement rather than resumed negotiation.

What this dispute is actually about has become clearer as more information emerges. The surface issue, whether a government contractor must allow unrestricted military use of its technology, has an obvious answer in most procurement contexts: if the government needs the capability unconditionally, it buys from a supplier that provides it unconditionally, or it builds the capability itself. The harder question is whether AI companies can maintain safety commitments as contractual terms with national security clients, or whether those clients will always be able to characterise such commitments as unacceptable restrictions. The answer established by this dispute will shape how every AI company approaches government contracts for the next decade.

Anthropic's position is not cost-free. The company has lost a significant federal customer, faces ongoing litigation, and is watching the Pentagon systematically build alternatives using its competitors' models. Dario Amodei has been publicly more conciliatory than Defense Secretary Pete Hegseth (sources have reported that Amodei pursued separate negotiations with the under-secretary for research), but the court record suggests those parallel diplomacy efforts have not changed the official trajectory. OpenAI, which signed a Pentagon deal weeks after Anthropic was blacklisted, took a different approach: negotiating safeguards rather than insisting on them as preconditions.

The political context matters. The Trump administration's push for a national AI framework that preempts state regulation and its simultaneous effort to use supply-chain risk designations to discipline AI companies that maintain safety restrictions on government use are not unrelated phenomena. Together they describe an approach to AI governance in which companies that cooperate with expansive government access get federal contracts, and companies that maintain independent safety standards get treated as national security risks. That is a policy, and its consequences for the long-term development of AI safety practices in American industry will be significant, regardless of how the Anthropic litigation resolves.
