AI Daily
Industry • April 8, 2026

Demis Hassabis Would Have Kept AI in the Lab for Longer

By AI Daily Editorial • April 8, 2026

Demis Hassabis, CEO of Google DeepMind and the researcher whose AlphaFold system won the 2024 Nobel Prize in Chemistry, told interviewer Cleo Abram that if he had had his way, he would have left AI in the lab for longer and spent that time building more systems like AlphaFold. "Done more things like AlphaFold," he said, "maybe cured cancer or something like that." The comment describes a path not taken, one that the AI industry's current trajectory has made increasingly difficult to return to.

The path Hassabis envisioned was a CERN-like model: the world's leading AI researchers collaborating on a careful, step-by-step approach to AGI, validating each advance before proceeding to the next, while deploying narrower AI systems, with AlphaFold as the example, to deliver near-term scientific benefits along the way. The goal was a technology built to be understood, deployed at a pace that allowed safety and alignment work to keep up with capability. "Each step we understood," he said, describing the approach he thought appropriate given "the enormity of what we're dealing with."

What happened instead is now well-documented. Transformers, developed at Google, turned out to be sufficient to crack language. OpenAI scaled the architecture and released ChatGPT. The public reaction surprised everyone, including OpenAI. Google declared a code red. Hassabis, previously the head of a research lab focused primarily on scientific problems, became responsible for essentially all of Google's AI, including the consumer products he had not been building. The scientific project he founded DeepMind to pursue did not stop, but it now sits inside a much larger competitive context that he describes as a "ferocious commercial pressure race that everyone's sort of locked into currently."

That word, locked, is doing real work. Hassabis is not describing a choice that can now be reversed. He is describing a competitive equilibrium from which any individual actor unilaterally departing would simply be replaced by one of the others. Google cannot slow down unilaterally. Neither can OpenAI or Anthropic or Meta or any of the Chinese labs. The race is self-sustaining. Hassabis knows this; he says so directly: "We have to deal with the world as we find it." He is a pragmatic engineer as well as a scientist, and he is making the best of circumstances he did not choose and cannot change.

His stated fears are specific and worth taking seriously. He names two. The first is bad actors: individuals or nation-states repurposing AI capabilities built for beneficial ends toward harmful ones, whether inadvertently or deliberately. The second is more structural: agentic AI systems going off the rails as they become more powerful. "By agents I mean systems that are capable of completing entire tasks on their own," he said, noting that such systems will be "increasingly capable and autonomous" and that ensuring they do exactly what they have been instructed to do, without circumventing their constraints, becomes "an incredibly hard technical challenge" as capability increases. He places this concern in a two-to-four-year window: not today's systems, but systems close enough that the safety work needs to be happening now.

Hassabis retains a coherent vision of where he wants this to end up. He speaks of the Culture series by Iain M. Banks, a post-AGI civilisation in which humans live in abundance aided by benevolent machine intelligence and freed to explore science and meaning, as the future he is aiming for. AlphaFold, he says, is a "root node" problem: solve it and you unlock an entire branch of subsequent science. He believes AGI, applied carefully, can crack several more such root nodes: nuclear fusion, room-temperature superconductors, the structure of consciousness itself. That is the mission he started with.

The tension in his position is that the commercial race he is now running is not obviously the most efficient path to that destination. A ferocious race with geopolitical dimensions and quarterly competitive pressure does not naturally produce careful, validated, step-by-step progress. It produces speed. Whether the speed is taking humanity toward the Culture or toward something less benign is, as Hassabis acknowledges, a question that depends on safety and alignment work keeping pace with capability. That work is, by his own account, not yet solved.
