The US Department of Defense formalised agreements with seven artificial intelligence companies this week, giving SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, and Amazon Web Services access to its classified and top-secret networks. Conspicuously absent from the list: Anthropic, whose tools are so popular with actual Pentagon staff that internal users have been given six months to stop using them. The gap between what the institution wants and what its employees actually prefer tells you something important about how this conflict is playing out.
The dispute is not primarily technical. The Pentagon labelled Anthropic a "supply-chain risk" in March, following an impasse over the guardrails the company places on how its AI models can be used in military contexts. Anthropic, which has built its commercial identity around constitutional AI and structured safety constraints, has declined to remove those constraints for defence applications. The military, which needs flexibility in how it deploys AI across planning, logistics, and targeting functions, has decided that flexibility matters more than the safety layer. A lawsuit followed the designation.
What gives this story additional texture is the Mythos angle. Pentagon Chief Technology Officer Emil Michael described Anthropic's latest model, which has advanced cyber capabilities, as a "separate national security moment." US officials are concerned that Mythos could "supercharge" hacking operations, and the uncertainty about who has access to the preview version of the model appears to have hardened the Pentagon's position. This creates an uncomfortable paradox for Anthropic: the more capable and differentiated its AI becomes, the more the military simultaneously wants it and fears it.
Meanwhile, the approved list includes Reflection AI, a recently founded startup that raised $2 billion in October. Its venture backer, 1789 Capital, counts Donald Trump Jr. as a partner and investor. The inclusion of Reflection alongside established players like Google and Microsoft, while Anthropic remains excluded, has drawn attention to the politics underneath the stated security rationale.
The Pentagon says it is trying to avoid "vendor lock" by spreading contracts across multiple providers. That framing is accurate as far as it goes: GenAI.mil, the military's main AI platform, has been adopted by more than 1.3 million personnel across the Defense Department in just five months of operation, and the pace at which new entrants are being onboarded has dropped from eighteen months to under three. But vendor diversification and vendor exclusion based on safety policy are different things, and the current situation is plainly the latter.
There are signals that the standoff may not be permanent. President Trump said last week that Anthropic was "shaping up" in the eyes of his administration, a comment widely read as suggesting that a negotiated resolution remains possible. The White House has reportedly been facilitating conversations between Anthropic and Pentagon officials, exploring whether Mythos could be integrated for cybersecurity defence even while the broader access dispute continues. Prediction markets tracking the dispute price a resolution as a near-certainty.
What the episode reveals is something that will matter well beyond this particular contract: AI companies that built their reputations on principled limits are discovering that those principles carry real commercial costs when the buyer in question is the world's largest military. Whether Anthropic bends, whether the Pentagon relents, or whether a compromise is found that neither side fully endorses, the underlying tension between AI safety architecture and institutional deployment demands is not going away.