AI Daily
Policy • April 30, 2026

From Blacklisted to Back in the Room: The White House Wants Anthropic Again

By AI Daily Editorial

Not long ago, Donald Trump was calling Anthropic a "radical left, woke company" that made a "disastrous mistake" trying to strong-arm the Department of War into following its terms of service. He announced a six-month phase-out of Claude across federal agencies and said the administration would not do business with Anthropic again. That declaration now appears to have a shelf life. According to reporting by Axios, the White House is drafting an executive action that would ease the tension and bring Anthropic back into the federal AI program. A senior source described the draft as a way to "save face and bring them back in." That phrase is doing a lot of work.

The backstory is worth revisiting. Anthropic had been supplying Claude to the Pentagon for use in classified and sensitive military applications, including the program referred to as Mythos, when negotiations over the terms of that use broke down. Anthropic, like several other AI companies, maintains usage policies that restrict certain applications of its models, including those that could facilitate lethal autonomous weapons decisions without human oversight. The Pentagon wanted broader operational latitude. Anthropic declined to provide it. Trump's public response was characteristically direct and was widely interpreted as a permanent rupture.

It was not. The Axios report describes senior White House officials, including Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, sitting down with Anthropic CEO Dario Amodei in what was characterised as a productive introductory meeting. The administration is also reportedly discussing Mythos with the Indian government, which is separately seeking access to the model through its own diplomatic channels. India's Finance Minister Nirmala Sitharaman confirmed at a public event that the Ministry of Electronics and IT has been engaging with both US officials and Anthropic directly, including at Anthropic's US headquarters. India's concerns are different from Washington's: the government wants equitable access for Indian companies and is focused on protecting critical infrastructure, including power grids and banking systems, from AI-related risks.

What is interesting about this sequence of events is what it reveals about the actual leverage dynamics in AI policy. The Trump administration's position was that it did not need Anthropic. Federal agencies would find other models; the administration had plenty of options. The speed with which that position has softened suggests the operational reality was more complicated. Claude has been integrated into workflows across multiple agencies at various levels of sensitivity. Replacing it entirely within a six-month window, with a model that meets the same performance thresholds, is a harder problem than the initial announcement implied.

Anthropic, for its part, has an obvious interest in maintaining federal contracts, which represent a significant revenue stream and a legitimising signal for its broader commercial positioning. But it also has a stated commitment to responsible deployment that it has been willing to defend even at commercial cost. Whether the rumoured executive action will actually resolve the underlying disagreement, rather than paper over it, is the real question. If the draft simply removes the political hostility without settling what Claude can and cannot be asked to do in classified military contexts, the same dispute will resurface.

The Indian dimension adds a further layer. India's interest in Mythos reflects a broader pattern in which non-Western governments are increasingly trying to secure access to frontier AI capabilities on terms that serve their own interests, rather than simply accepting whatever deployment model American companies prefer. India's concerns about cyber risk associated with Mythos are substantive, not a rhetorical lever in the access negotiation. But the framing of "equitable access" also signals that India does not want to be in a position where its critical systems depend on technology whose terms can be changed by a foreign company or government without its input.

The Anthropic-Pentagon story is, at bottom, a story about who gets to set the rules for AI deployment in high-stakes environments, and what happens when those rules conflict. Anthropic's usage policies represent an assertion by a private company that some applications of its technology are off-limits regardless of who is paying. The Trump administration's initial response treated that assertion as political obstruction. The quiet reversal now underway suggests the administration has concluded that the assertion is, at minimum, something that needs to be negotiated rather than simply overridden.
