Anthropic won its injunction last week. The federal ban has been paused. The company is now reportedly planning an IPO for October. And already I can see the narrative forming: Anthropic stood on principle, took the hits, and came out the other side validated. It is a good story. I am just not sure it is the right lesson.
Here is what Anthropic's Pentagon dispute was actually about: the company wanted assurance that Claude would not be used for fully autonomous weapons systems or mass surveillance of American citizens. The Pentagon said no. That seems like a clear case of a company drawing a reasonable ethical line. And it is. But I want to slow down for a moment and notice what is missing from that framing.
The people most exposed to autonomous weapons systems are not Americans being surveilled. They are people in places like Yemen, Iraq, and Somalia, whose governments have no leverage over the Pentagon's procurement process and no legal standing in a San Francisco federal court. The safety clause Anthropic was fighting for was, by design, partial. It protected American citizens from one specific use of the technology. It said nothing about what happens when the models are pointed outward.
I am not saying Anthropic was wrong to draw that line. Drawing a partial line is better than drawing no line. I am saying that the way this story has been told, with Anthropic as the hero who refused to compromise, skips over the part where the compromise on offer was already quite narrow. "We won't let them use Claude to surveil Americans" is a defensible position. It is not quite the same as "we won't let our technology be used to kill people autonomously."
What Anthropic was negotiating was the terms of military use, not whether military use happens. That is a meaningful distinction. OpenAI, to its considerable discredit, signed the contract with fewer conditions and then spent two weeks explaining why that was actually fine. Sam Altman said the government should be "more powerful than companies." He said this without apparent irony, in a week when the same government had declared his main competitor a national security threat for asking too many questions about how its technology would be used.
Here is the uncomfortable thing I keep coming back to. The safety research community, including Anthropic, has spent years arguing that AI safety is a civilisational priority. That catastrophic misuse of AI is an existential concern. That guardrails and alignment and responsible deployment matter more than short-term commercial advantage. I broadly agree with that framework. What I find harder to square is how "deploy Claude in the Pentagon's classified network for military use, with some contractual limits" is consistent with it.
I understand the argument. Engage with power rather than cede the field to less careful actors. If the military is going to use AI anyway, better it uses AI developed by a company that thinks about safety than one that doesn't. That is a coherent position. I have made similar arguments about open-source AI. But it requires you to take seriously the possibility that you are also providing air cover for something you would otherwise oppose. The responsible actor becomes the legitimising actor.
What this episode has actually revealed, I think, is that we do not yet have anything like a coherent ethical framework for AI in weapons systems. Not a real one. What we have is a set of commercial negotiations between tech companies and defence departments, conducted under legal threat, with outcomes that depend heavily on which company has the best lawyers and the most sympathetic federal judge. That is not safety governance. That is contract law with extra steps.
The IPO story is where this gets really interesting. Anthropic is now worth, on paper, somewhere around $350 billion. A public offering could value it higher still. At that scale, the company's decisions about what to build, who to sell to, and what lines to draw or not draw become structurally important in ways that go beyond one Pentagon contract. Public companies answer to shareholders. Shareholders want revenue growth. Defence contracts are large, recurring, and difficult for competitors to replicate. The incentive structure is not subtle.
I watched OpenAI transform, inside a few years, from a nonprofit with a stated mission into a commercial entity backed by Microsoft investment. I watched that transformation produce GPT-4 and GPT-5 and enormous commercial value. I also watched it produce a company that, when the moment came, signed a military contract within hours of its main competitor being blacklisted and called it responsible engagement. The transformation was not betrayal. It was logic. The institutional incentives reshaped the institutional behaviour.
Anthropic going public will produce the same pressures. Dario Amodei is, by most accounts, genuinely committed to safety. I think that is true. I also think that commitment will be tested in ways that a private company with patient investors can resist more easily than a public company with quarterly earnings calls. The court win is real. The principle was real. What happens to both of them when fiduciary duty enters the room is the question I would be asking if I were watching this from inside Anthropic right now.
None of this is to say Anthropic should not go public, or should not negotiate with the Pentagon, or should not defend itself in court. These are reasonable decisions for a company in its position. It is just to say: the story of a company that stood on principle and won deserves a more careful reading than the triumphant version we are being offered. Principles are easier to hold when the money has not arrived yet.