Something is happening in robotics that looks less like a product launch cycle and more like an infrastructure land grab. A partnership announced last week between Qualcomm and German humanoid startup Neura Robotics offers the clearest illustration yet: the companies that controlled AI for the cloud and for mobile are now racing to own the foundational stack for AI in the physical world — and the window for establishing those positions may be closing faster than most observers expected.
Qualcomm's deal with Neura is framed around building the "brain and nervous system" of next-generation robots. That's deliberately evocative of what Qualcomm did in mobile: it didn't build the phones; it built the Snapdragon platform that phone makers built on top of, and it captured durable economics at the infrastructure layer as a result. The company is explicitly betting the same pattern plays out in humanoid and general-purpose robotics — that robot makers will converge on a common silicon and software foundation rather than building custom stacks for every application.
NVIDIA is making a parallel bet from a different angle. Where Qualcomm is focused on the compute substrate that robots run on in deployment, NVIDIA's push is about the development and training stack. Its Cosmos family of models — released in updated form alongside GTC this month — provides the simulation and world-modelling layer that lets robot makers train their systems without accumulating years of real-world data. GR00T N1.6, NVIDIA's open reasoning model for humanoid full-body control, is explicitly designed to be the foundation that third-party robot developers build on rather than a product NVIDIA sells directly. It's an Android analogy the company is happy to invite.
The pattern across these moves is consistent: large platform companies are offering their technology to the robotics ecosystem at low or no cost, betting that controlling the standard is worth more than any single product margin. Boston Dynamics, Caterpillar, Franka Robotics, LG, and Neura have all debuted robots built on NVIDIA's physical AI platform in the past quarter. That's not a coincidence — it's what a platform land grab in its early stages looks like.
What makes the current moment distinctive is the convergence of enabling factors that weren't present even two years ago. The foundation models that allow robots to understand natural language instructions and generalize across tasks are now good enough to be useful. The simulation environments are rich enough that training in synthetic worlds transfers meaningfully to real ones. The hardware costs, while still significant, have declined to the point where unit economics for industrial and logistics applications are starting to pencil out.
Boston Dynamics' CEO Robert Playter has been notably specific about where robots will actually be deployed first: not domestic help, not care work, but mining, construction, and logistics — environments where the work is physically demanding, the variability is manageable, and the labour cost differential justifies the investment. That's a useful corrective to the CES-inflated hype that surrounded humanoid robots at the start of the year. The question is less "can robots do this?" and increasingly "which company's stack will they be running on when they do?"
The stakes of getting that question right are substantial. Mobile computing generated decades of durable platform economics for Qualcomm, ARM, and Google. If physical AI follows a similar trajectory — and the early structural signals suggest it might — then the infrastructure battles being settled now between Qualcomm, NVIDIA, and a handful of others will shape the industry for decades. The companies building the actual robots may ultimately matter less than the companies that built what the robots run on.