Reading the email thread released this week from the Musk vs. Altman trial, what strikes me is not the skepticism from Microsoft's executives. Of course they were skeptical. Harry Shum, one of Microsoft's most senior AI researchers, stated plainly in 2017 that he had visited OpenAI and "was not able to see any immediate breakthrough in AGI." That was an honest assessment from someone qualified to make it.
What strikes me is why they invested anyway. Eric Horvitz, another senior executive, stated the real motivation directly: "My worst case scenario is having them ditch Azure for AWS." Not: this technology will transform the world. Not: this is the mission we want to support. The worst case was losing the account to a competitor.
This is how the AI revolution was built. Not by visionaries who saw what was coming. By companies that could not afford to be left behind.
There is a common version of the AI story in which a small group of true believers recognised the potential of large language models and pushed them forward with something like missionary zeal. The trial evidence tells a more interesting story. In 2017, Microsoft's smartest AI researchers thought OpenAI was a Dota 2 project. Nadella himself wrote that he "can't tell what research they are doing." The investment proceeded not because anyone was convinced of a breakthrough, but because allowing a promising research organisation to land at Amazon was worse than taking a loss on GPU-hours.
This is not a criticism of the individuals involved. They were making rational decisions in a competitive market. The problem is what rational individual decisions produce at scale.
Microsoft's investment emboldened OpenAI. OpenAI's scale drew Google into the race. Google's moves triggered Amazon's latest deal. Each step was individually defensible. The aggregate result is an industry moving faster than any safety team, government institution, or regulatory framework can track.
The same mechanism explains what happened to OpenAI's safety apparatus. Former employee Rosie Campbell described how, after the board crisis of late 2023, the safety organisation contracted. By the end of 2024, the Superalignment Team had been wound up and the AGI Readiness Team dissolved. Campbell doesn't describe this as malice. She describes it as a company changing its priorities.
Of course it did. A safety team that says "wait" or "don't ship yet" is commercially expensive in a way that is very hard to sustain. Not because the leadership is dishonest about its mission, but because a company generating billions from AI products is in a race, and its competitors do not wait. If OpenAI slows down, Anthropic doesn't. If Anthropic slows down, Google doesn't. The competitive logic punishes caution systematically, and without malice.
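You can see the shape of that logic in miniature. Here is a sketch of the race as a two-firm prisoner's dilemma; the payoff numbers are purely illustrative assumptions of mine, not anything from the trial record:

```python
# The race dynamic as a two-firm prisoner's dilemma.
# All payoff numbers are illustrative assumptions, not data.

# payoffs[(a_move, b_move)] = (payoff to firm A, payoff to firm B)
# "pause" = hold for safety review, "ship" = release fast
payoffs = {
    ("pause", "pause"): (3, 3),  # both wait: shared, safer market
    ("pause", "ship"):  (0, 5),  # the firm that waits loses the market
    ("ship",  "pause"): (5, 0),
    ("ship",  "ship"):  (1, 1),  # all-out race: margins eroded, risk absorbed
}

def best_response(rival_move: str) -> str:
    """Firm A's best reply to a fixed move by firm B."""
    return max(("pause", "ship"), key=lambda m: payoffs[(m, rival_move)][0])

for rival_move in ("pause", "ship"):
    print(f"rival {rival_move}s -> best response: {best_response(rival_move)}")
# Prints "ship" both times: shipping strictly dominates pausing,
# even though (pause, pause) pays each firm more than (ship, ship).
```

Whatever the rival does, shipping pays more, so both firms ship and both land in the worst cell they could have jointly avoided. That is the structure the emails document, reduced to a payoff table.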
I understand the calls for regulation that follow from this. But I want to be honest about what regulation is being asked to fix. If one jurisdiction imposes safety reviews, development moves to others. If all jurisdictions try to act simultaneously, you are asking nation-states to coordinate on a commercially valuable technology in ways they have rarely achieved. Export controls on advanced semiconductor equipment have been tried with determination. Chips still move. The coordination problem is real and stubborn.
What the Microsoft emails actually document is not a historical curiosity about how a famous partnership began. It is a structural trap that has not changed in nine years. Nobody wants to lose the deal. Nobody can afford to be the only one who stops. Each individual decision is rational. The collective outcome is one that none of those individuals, given a genuine choice about the destination, would have designed.
The people bearing the costs of that outcome are not the executives in the email thread. They are the workers in industries being restructured, the communities dependent on the jobs being displaced, the safety researchers whose teams get dissolved when the numbers get big enough. The executives were scared of Amazon. That fear built something, and it is still building. The people who will live inside what it builds mostly did not get a vote.
I do not have a tidy solution to offer. I am suspicious of anyone who claims to. The problem is not bad people making bad decisions. It is a system that produces outcomes nobody would endorse if they could see the whole picture at once. That distinction matters, because solutions that assume individual bad faith will not fix a coordination problem.
The emails themselves are entertaining reading. What they describe is not.