AI Daily
Infrastructure • Monday, March 16, 2026

Stargate Is Building the Future. It Just Might Be Building Last Year's Future.

By AI Daily Editorial • Monday, March 16, 2026

The Stargate Project — OpenAI's $500 billion joint venture with SoftBank, Oracle, and Nvidia — is the most ambitious infrastructure build in the history of the technology industry, and it is moving fast. Five new data centre sites have been announced, taking planned capacity to nearly 7 gigawatts across Texas, New Mexico, Wisconsin, and Michigan, with an additional international site in Norway. The flagship Abilene, Texas facility is already operational, training and serving frontier AI systems. The first $100 billion has been deployed. By almost any measure, Stargate is executing. The question CNBC chose to ask this month — "is Oracle building yesterday's data centres with tomorrow's debt?" — is less flattering, and possibly more important.

The concern is timing. Data centre construction runs on multi-year cycles: land acquisition, permitting, power infrastructure, building, fit-out. Nvidia's GPU generations turn over roughly every 12–18 months. OpenAI has already signalled that it does not want to expand its Oracle partnership at Abilene, because it wants facilities configured for Nvidia chip generations newer than the ones the existing build was designed around. That's a significant admission: a facility that isn't yet fully operational may already be suboptimal for the hardware that will be available when it comes online. Oracle, which has taken on substantial debt to fund its Stargate buildout, is exposed to this mismatch in a way that OpenAI — which can redirect its commitments more flexibly — is not.

The broader infrastructure buildout has a similar dynamic. TechCrunch's February roundup of billion-dollar infrastructure deals found the same pattern across the industry: commitments being made today for facilities that will come online in 2027 and 2028, to be fitted with hardware that doesn't fully exist yet, to run AI workloads whose nature is still being defined. The scale of the commitment is so large that it must, to some degree, be a bet on continuity — that the AI infrastructure needed in 2028 will look enough like the AI infrastructure needed in 2025 to make the current build choices defensible. Given the pace of change in the field, that's not a certainty.

There's also the power question, which connects to a story covered elsewhere in today's paper. Stargate's Oracle partnership now involves 4.5 gigawatts of planned capacity — a number that makes the energy sourcing challenge concrete in a way that abstract discussions of AI power demand do not. OpenAI has partnered with SB Energy for renewable power supply, but renewable generation and data centre construction run on timelines that are not perfectly aligned, so some Stargate capacity will initially run on grid power that is not clean.

None of this means Stargate is a mistake. The structural bet — that frontier AI will require vastly more compute than currently exists, and that building that compute infrastructure now creates durable competitive advantage — is plausible and widely shared across the industry. What it does mean is that at $500 billion, the margin for timing error is not large, and the assumption that chip generations, power availability, and AI workload requirements will all converge neatly on the schedule the current build implies deserves more scrutiny than it typically receives in the announcements. The facilities being opened are real and impressive. The ones still being planned are bets on a future that nobody has fully specified.
