Thirty days ago, Anthropic was a national security threat. On Friday, it was a company preparing for an IPO. The speed of that reversal tells you something important about both the fragility and the resilience of the AI industry's relationship with American government power.
Recap for those who lost the thread: in late February, Defense Secretary Pete Hegseth designated Anthropic a "supply-chain risk to national security" after the company refused to guarantee that its Claude models could be used for fully autonomous weapons systems or mass surveillance of American citizens. The White House followed with a directive ordering every federal agency to stop using Anthropic's technology immediately. OpenAI, to widespread disgust, signed a competing Pentagon contract within hours. Sam Altman later admitted the timing "looked opportunistic and sloppy."
What happened in court was striking. At a San Francisco hearing on March 24, U.S. District Judge Rita Lin told the government's lawyers that the Pentagon's decision appeared to be "an attempt to cripple" Anthropic, and questioned whether the DOD had violated the law. The government's justification was thin: its lawyer said officials had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems." That's not evidence of wrongdoing. That's a prediction about hypothetical future behavior used to justify present destruction.
Judge Lin said she expected to issue an order within days. A court order blocking the ban came through on March 26, pausing the federal government's directive. Anthropic, which had said the blacklisting could cost it billions in government business, had its injunction. The company that was a national security threat on a Tuesday was a normal commercial entity again by the following Thursday.
Then, within 24 hours of that ruling, Bloomberg reported that Anthropic is in early discussions with Wall Street banks about an IPO as soon as October. The potential listing could raise more than $60 billion. The company, valued at up to $350 billion in a November funding round backed by Microsoft and Nvidia, would race OpenAI to be the first major AI lab to go public.
There is something genuinely strange about watching these two storylines run in parallel. A company facing existential threat from government action one week is planning a public market debut the next. But the logic holds if you look at it from the investor side. The court victory is evidence that the blacklisting was legally questionable. The IPO discussions are a vote of confidence that Anthropic survived the assault intact, and perhaps emerged from it with its reputation as the AI company that refused to bend on weapons autonomy actually enhanced among certain audiences.
What the episode has clarified is the enormous stakes of the question Anthropic was originally asking: can an AI company negotiate limits on military use as a condition of government contracts? The Trump administration's answer was no, and they were willing to torch a $350 billion company to make the point. Anthropic's answer was to sue. The court's answer, at least provisionally, was to side with Anthropic.
That is not nothing. The AI industry has spent years performing careful neutrality on questions of weapons development, surveillance, and military autonomy. The Anthropic case forced a binary choice. And the outcome suggests that at least one federal court thinks "we won't help build fully autonomous killing systems" is a legally defensible commercial position, not a threat to national security.
Whether the IPO actually happens in October depends on market conditions, the outcome of the full litigation, and whether Anthropic can demonstrate the revenue growth public investors will expect. The $60 billion figure is a projection, not a promise. But the fact that it's being discussed at all, after the month Anthropic just had, is the more interesting story. It suggests the company's bet on maintaining safety limits paid off in a way that nobody expected when the blacklist came through in February.
The open question is what happens to the underlying dispute. The injunction pauses the ban; it does not resolve it. There is still a federal government that wanted unlimited access to Claude for military purposes, and a company that said no. That is not a disagreement a court order makes disappear. It is a negotiation that has been temporarily frozen, and it will eventually thaw.