The pharmaceutical industry spends roughly $300 billion a year on research and development. Most of that money goes toward a process — identifying candidate molecules, testing them in silico, validating them in cells, then animals, then humans — that has not changed structurally in decades despite continuous incremental improvement. AI is beginning to change that structure, and NVIDIA is positioning itself as the essential infrastructure for the change, in much the same way it became indispensable to the AI training boom. The pattern is recognisable: build the compute platform, get the leading labs dependent on it, let the applications proliferate on top.
The most concrete recent signal is NVIDIA's co-innovation lab with Eli Lilly, announced this year, which involves up to $1 billion in combined investment over five years in talent, compute, and infrastructure. The lab's stated goal is to apply AI across Lilly's drug discovery pipeline — from target identification through lead optimisation — at a scale that would not be feasible without dedicated compute infrastructure. Lilly is one of the world's largest pharmaceutical companies, with a strong position in GLP-1 drugs and a pipeline worth defending. That it is willing to commit resources at this level to AI-native drug discovery signals that the industry has moved past the pilot phase.
The broader platform play is BioNeMo, NVIDIA's open development framework for AI-driven biology. BioNeMo has attracted adoption from Chai Discovery, Basecamp Research, and Boltz, among others — a set of companies working on protein structure prediction, molecular generation, and related problems that sit at the foundation of drug discovery. The platform connects these models to NVIDIA's compute infrastructure and supports what the company calls "lab-in-the-loop" workflows: AI models that can direct and interpret wet lab experiments rather than just processing existing data. That is a more ambitious integration than most current AI-for-pharma tools achieve.
Roche's deployment is the most concrete example of what industrialised AI drug discovery infrastructure looks like in practice. The Swiss pharmaceutical company is running more than 3,500 NVIDIA Blackwell GPUs across data centres in the US and Europe, supporting everything from biological foundation models to manufacturing digital twins. This is not a research cluster — it is production compute embedded in ongoing operations. The scale suggests that Roche has moved from evaluating AI for drug discovery to treating it as a core operational dependency.
NVIDIA's own survey data on AI in healthcare and life sciences — covering respondents across pharma, biotech, and medical devices — found that 46% of pharmaceutical and biotech companies identified drug discovery as among their top return-on-investment use cases for AI, and that 85% planned to increase their AI budgets this year. The 46% figure is notable because drug discovery is also one of the most technically demanding applications, with long feedback loops and high failure rates. That it ranks this highly as an ROI use case suggests the gains are legible enough to justify continued investment even where timelines to outcome are measured in years, not quarters.
The cautionary note is that NVIDIA's infrastructure position in drug discovery creates the same concentration risk it creates in AI training: if BioNeMo becomes the standard substrate for computational biology, then NVIDIA captures a large share of the value from whatever the biology labs produce. That dynamic has worked out reasonably well for AI researchers — hardware availability has expanded enormously even as NVIDIA's margins have remained high. Whether the same dynamic is healthy for pharmaceutical R&D, where the public has a stronger interest in the outputs and their pricing, is a question that will take years to surface clearly. For now, the adoption curve is steep and accelerating, and the infrastructure is being built before the governance frameworks for it are.