The sequence of events in late February and early March reads like a compressed history of how not to manage a strategic relationship. Anthropic, which held a $200 million Pentagon contract and had been negotiating the terms under which its AI could be used for military purposes, drew a line: it would not allow its models to enable domestic mass surveillance or fully autonomous weapons systems. The Defence Department, under Secretary Pete Hegseth, responded by designating Anthropic a supply-chain risk to national security. The White House then directed every federal agency to cease using Anthropic's technology immediately. Hours later, OpenAI announced its own deal with the same Pentagon, one that Sam Altman later admitted "looked opportunistic and sloppy."
Dario Amodei's reaction was blunt. He publicly called OpenAI's messaging around its military deal "straight up lies," a phrase that stands out in an industry where founders typically avoid direct attacks on competitors. The specific dispute appears to be over how OpenAI characterised its agreement and what safeguards it actually secured. OpenAI's published terms include prohibitions on domestic mass surveillance and requirements for human oversight of lethal-force decisions, terms that, on paper, look similar to the positions Anthropic had been negotiating. Amodei's implication is that the framing obscured what was actually agreed to.
Altman, for his part, acknowledged that the deal was "definitely rushed" and that the optics were poor. In an apparent response to the criticism, he later amended some of the publicly stated terms, adding limits on surveillance applications. The initial announcement had been made without the kind of careful public communication that a contract of this political sensitivity warranted. Whether the underlying terms were also inadequate, or merely badly explained, remains disputed.
Meanwhile, Anthropic did not simply accept the blacklisting. Amodei went back to the negotiating table with Emil Michael, the under-secretary of defence for research and engineering, seeking terms that would allow Anthropic's models back into federal use. The FCC chair weighed in publicly, saying Anthropic had "made a mistake" in its Pentagon talks and should "correct course," an unusual intervention from a telecommunications regulator that suggested the political stakes had spread well beyond defence procurement. By early March, Google had stepped into the gap Anthropic left, deepening its own Pentagon AI relationship, and Microsoft confirmed Anthropic's products would remain available to non-defence customers through Azure.
What the episode reveals, more than any specific policy question, is how poorly calibrated the AI industry is for the political environment it now operates in. Both OpenAI and Anthropic had, at various points, positioned themselves as thoughtful stewards of powerful technology. The Pentagon saga exposed the limits of that positioning when the technology becomes genuinely strategic. Anthropic tried to negotiate limits on how its AI would be used for war and surveillance, reasonable positions on their face, and discovered that a government with different priorities could simply redesignate it as a security risk. OpenAI moved quickly and discovered that the speed itself became a liability.
The deeper question is what "responsible AI for defence" actually means and who gets to define it. The AI companies' instinct is to set internal policies and negotiate them into contracts. The government's instinct is that when national security is at stake, it sets the terms. Those two positions are not easily reconciled, and the late-February sequence made clear that when they conflict, the government holds most of the cards. Every AI company with federal exposure is now recalibrating what it will and will not accept — and how loudly it will say so in public.