AI Daily
Hardware • March 26, 2026

NVIDIA Bets on Both Sides: A 10-Gigawatt OpenAI Deal and an Open-Model Family in the Same Week

By AI Daily Editorial • March 26, 2026

NVIDIA had a busy week, and the two big announcements point in interestingly different directions. On one hand, the company finalised a strategic partnership with OpenAI to deploy at least ten gigawatts of AI data centre capacity running NVIDIA's Vera Rubin platform, with the first gigawatt coming online in the second half of this year. On the other hand, NVIDIA also released Nemotron 3: a new family of open, customisable language models that any developer can download, fine-tune, and run without paying NVIDIA a dollar in inference fees. The company is simultaneously locking in the biggest compute contract in the industry's history and giving away models designed to reduce dependence on proprietary frontier AI.

The 10GW OpenAI partnership is the headline number, and it is worth dwelling on how large it actually is. Ten gigawatts is roughly the output of ten large nuclear power stations, or about one percent of the United States' total electricity-generating capacity. The first gigawatt will run on Vera Rubin, NVIDIA's next-generation platform, which locks OpenAI into NVIDIA's hardware roadmap for years. For NVIDIA, this is not just revenue: it is a guarantee that Vera Rubin ships at scale, with a named customer, before competitors can position their own next-generation silicon. For OpenAI, it is a bet that its compute needs will keep growing faster than any diversification strategy could offset.
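The comparisons above can be sanity-checked with round numbers. A minimal sketch, assuming a typical large reactor outputs on the order of 1 GW and US utility-scale generating capacity is roughly 1,200 GW (both approximate figures, not from the announcement):

```python
# Back-of-envelope check on the 10 GW figure.
# Assumed round numbers, not from the NVIDIA/OpenAI announcement:
deal_gw = 10          # announced deployment commitment
reactor_gw = 1.0      # typical large nuclear reactor output, approximate
us_capacity_gw = 1200 # rough total US generating capacity, approximate

reactors_equivalent = deal_gw / reactor_gw
share_of_us = deal_gw / us_capacity_gw

print(f"{reactors_equivalent:.0f} large reactors")   # → 10 large reactors
print(f"{share_of_us:.1%} of US capacity")           # → 0.8% of US capacity
```

At these assumed inputs the deal comes out to about 0.8 percent of US capacity, consistent with the article's "about one percent".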

That context makes the Nemotron 3 release worth reading alongside it. The Nemotron 3 family is designed explicitly for developers who want to run capable, customisable models without routing every query through OpenAI's or Anthropic's APIs. The models are open-weight, meaning anyone can fine-tune them for specific tasks, and they are sized for practical deployment across different hardware configurations. NVIDIA's pitch is that the open-source ecosystem is a legitimate alternative to proprietary frontier AI for many enterprise use cases, and that running Nemotron on NVIDIA hardware is the natural way to pursue that path.

There is nothing contradictory about this from NVIDIA's perspective. The company makes money from compute, not from AI services, so whether customers are running OpenAI's proprietary models or NVIDIA's open ones, the GPU revenue flows the same way. What NVIDIA is doing is ensuring it has a compelling story for every segment of the market: hyperscalers and frontier-AI labs at the top, through massive infrastructure deals; enterprises and developers in the middle, through open models they can deploy and customise; and physical AI and robotics at the edge, through the Cosmos world foundation model work announced separately this week.

The open-model release also positions NVIDIA in a quietly important debate about market structure. One of the concerns raised by the rapid consolidation of AI around a handful of proprietary APIs is that it creates single points of dependency for large swaths of the economy. Cursor's announcement, reported this week, that it is building its own models to reduce dependence on Anthropic and OpenAI reflects the same underlying anxiety. NVIDIA releasing Nemotron 3 validates that concern while offering hardware-native open alternatives as the solution, which happens to be very good for NVIDIA's own bottom line.

The 10GW number is almost certain to appear in political arguments about AI infrastructure, energy use, and the data centre construction debate now playing out in Congress. Ten gigawatts' worth of OpenAI compute, all running NVIDIA hardware, is precisely the kind of concentrated infrastructure commitment that the Sanders-AOC bill is responding to. Whether that political pressure changes anything about the trajectory of this buildout remains to be seen: the contracts are signed, the hardware roadmaps are set, and the first gigawatt is months away. Washington's regulatory conversation is running several years behind the capital-deployment curve.

Sources