AI Daily
Rows of servers in a vast data center, dimly lit with blue indicator lights
Industry • April 18, 2026

The Other Race: OpenAI and Anthropic Are Fighting About Electricity

By AI Daily Editorial • April 18, 2026

While the public argument between OpenAI and Anthropic plays out in benchmark tables and product launches, a more fundamental contest is happening in energy procurement and data centre construction. OpenAI sent a memo to investors in early April asserting that Anthropic has made a "strategic misstep" in not acquiring enough compute, and predicting that the gap will become visible in the product. It is the kind of thing you say when you believe the other side is winning on points and you want to change the subject to infrastructure.

The numbers in the memo are striking. OpenAI says it is planning 30 gigawatts of compute capacity by 2030. It expects Anthropic to have roughly 7 to 8 gigawatts by the end of 2027. Anthropic has not publicly disputed the figures, and the Daniela Amodei quote that has circulated most in response is characteristically measured: the company has been deliberately "doing more with less," betting on algorithmic efficiency over raw scale.

There is some evidence for that bet paying off. Anthropic's annualised revenue grew from roughly nine billion dollars at the end of 2025 to thirty billion by the end of March 2026, a rate of growth that has unsettled some OpenAI investors. Bloomberg reported in early April that demand for OpenAI shares on secondary markets had softened while interest in Anthropic was running hot. TechCrunch noted that some OpenAI investors were explicitly reconsidering their position. The coding tools have been the main driver of Anthropic's surge, and Claude's share of the enterprise coding market has been the thing OpenAI most visibly failed to anticipate.

But the compute constraint critique has some teeth. Users of Claude have reported rate limiting under heavy load, and developers building on the API have noticed throttling at peak times. The lead creator of Claude Code acknowledged that medium effort is now the default for reasoning: you have to actively request high or maximum effort, whereas previously the model would spend more compute unprompted. An AMD senior AI director publicly observed that the thinking depth of Claude 4.6 had already dropped before Opus 4.7 was released, claiming the number of characters used in the thinking process had declined by around three-quarters. The reply from Anthropic was honest but not entirely reassuring: yes, medium effort is the default now.

This is the tension at the core of Anthropic's position. Its revenue is growing faster than OpenAI's because developers trust its models for serious work. But if the models that developers depend on begin to throttle or degrade under load, that trust erodes quickly. The OpenAI memo was partly competitive posturing, but it was also a calculated attempt to plant a question in enterprise buyers' minds: is Claude reliable at scale?

Greg Brockman, the OpenAI co-founder who is now leading the Codex project, offered a candid admission in a recent interview. He said OpenAI fell behind in coding because it trained on abstract programming competitions rather than the messy, interrupt-filled reality of real codebases. Anthropic, he said, grounded its training in the actual texture of software engineering. He believes OpenAI has now caught up. Anthropic's position is that the gap it opened was not accidental, and that its efficiency-first approach is a structural advantage rather than a temporary lead.

The rivalry is also personal. Dario Amodei left OpenAI in 2021 after disagreements with its direction, and recent reporting suggests that his most fundamental friction was not with Sam Altman but with Brockman, whose approach to staff management and strategic direction he found incompatible with his own. The Wall Street Journal has reported on early incidents at OpenAI in which Amodei considered quitting over the company's proposed relationship with national governments. The two men are now the principal figures behind what are arguably the two most consequential coding AI products of 2026.

The compute argument will not be resolved by a memo. What will settle it is whether Anthropic's inference efficiency can keep pace with demand as it continues to grow, or whether the 7 to 8 gigawatt ceiling becomes a visible constraint before the company can raise and build its way past it. OpenAI is hoping that constraint shows up soon. Anthropic is betting it never does.
