The emails were not written to be published. In the summer of 2017, a small group of Microsoft executives were weighing whether to give OpenAI more computing power, and they were candid about their doubts. "I visited OpenAI about a year ago," wrote Harry Shum, one of Microsoft's most senior AI researchers, "and was not able to see any immediate breakthrough in AGI." That assessment, expressed in a private group thread, now sits in the public record as trial evidence in the ongoing lawsuit between Elon Musk and Sam Altman.
The backstory requires a brief rewind. Before ChatGPT, before GPT-3, before the company became one of the most consequential organisations in technology, OpenAI was building an AI to beat human players at the video game Dota 2. Microsoft had given them a steep discount on Azure compute in 2016: $0.24 per GPU-hour against a list price of $1.15. OpenAI paid $10 million for what would have cost $60 million at market rates. Microsoft took a projected $15 million loss. OpenAI burned through it in roughly half the originally projected time, then came back for more: in August 2017, Sam Altman emailed Satya Nadella asking for another $300 million in compute for the next phase of the project.
Nadella circulated the request internally. What followed was a candid discussion that the Musk trial has now made public. Jason Zander flagged the PR problem: Microsoft's marketing team had explicit guidance against promoting "machines beating humans" as a narrative. He wasn't sure whether the company was "being overly literal" about that guidance, but he was clear he didn't want to "take a complete bath" on the deal. Brett Tanzer ran the numbers and confirmed the financials were difficult. Nadella himself admitted uncertainty: "I can't tell what research they are doing and how if shared with us it could help us get ahead." The quotes from the Dota AI victory were promising, he acknowledged, but there was no clear path to Microsoft benefiting from the underlying technology, and no significant PR return on the deal as structured.
Then comes the email that clarifies why Microsoft invested anyway. Eric Horvitz, another senior executive, stated his position directly: "My worst case scenario is having them ditch Azure for AWS." Not excitement about AI's potential. Not confidence in OpenAI's technology. Fear of losing the account to a competitor. The same logic that shapes most major corporate decisions: not mission, but market position.
The trial's internal testimony about OpenAI itself lands on a parallel track. Rosie Campbell, who worked on OpenAI's "AGI Readiness Team" focused on catastrophic risk, described watching the safety infrastructure steadily contract. After Altman was fired and reinstated in November 2023, two independent board members resigned and were replaced with new members Campbell found less focused on safety. By the end of 2024, the AGI Readiness Team had been dissolved entirely, and the Superalignment Team working on AI value alignment was also wound up. Members were offered positions elsewhere in the company; Campbell didn't find opportunities aligned with the work she wanted to do, and she left. "It would be difficult to achieve the mission without that kind of work being done," she told the jury. Former board member Tasha McCauley, who voted to fire Altman and later resigned, also gave a video deposition about what she saw during that turbulent period.
The juxtaposition is difficult to ignore. The 2017 Microsoft emails show an investor who didn't really believe in the technology, proceeding anyway out of competitive anxiety. The 2024 OpenAI testimony shows a safety operation dismantled just as the company became commercially enormous. In both cases, the stated mission was about advancing AI responsibly. In both cases, what actually drove the decisions was money and competitive position.
What the trial almost certainly cannot resolve is the harder question it keeps raising: whether any commercial organisation, once it reaches a certain scale and becomes too valuable to fail, can remain meaningfully committed to a mission that sometimes requires sacrificing profit. The history of such organisations, in AI and elsewhere, does not offer much encouragement. The emails from 2017 are entertaining. The pattern they represent is not.