National Robotics Week is a low-key annual event that rarely generates headlines beyond trade press. This year, NVIDIA used it as the backdrop for something considerably more ambitious: the simultaneous release of a new generation of open physical AI models, a physics simulation engine, and a collaboration with Hugging Face that formally binds two of the largest developer communities in AI together around a common robotics stack.
The centrepiece announcements are Cosmos Reason 2 and Isaac GR00T N1.6. Cosmos Reason 2 is a multimodal reasoning model trained specifically for robots and autonomous systems. Its job is to help machines understand what they are looking at and predict what will happen next in the physical world: not just labelling objects, but reasoning about physics, occlusion, and causality. GR00T N1.6, meanwhile, is a vision-language-action model aimed at humanoid robots, enabling full-body control coordinated by a reasoning layer. Both are being released as open models.
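That pairing of a reasoning layer with a full-body controller follows a pattern common to vision-language-action stacks: a slow reasoner picks a subgoal, and a fast action head turns it into joint commands. A minimal sketch of the pattern, with every name and number invented here for illustration rather than drawn from GR00T N1.6's actual interfaces:

```python
# Illustrative only: the slow-reasoner / fast-controller split used by many
# VLA-style humanoid stacks. All names are hypothetical, not GR00T APIs.
from dataclasses import dataclass

@dataclass
class Subgoal:
    skill: str                           # e.g. "grasp", "idle"
    target: tuple[float, float, float]   # Cartesian goal in metres

def reasoning_layer(instruction: str) -> Subgoal:
    """Stand-in for the slow multimodal reasoner (order of 1 Hz)."""
    if "pick" in instruction.lower():
        return Subgoal("grasp", (0.4, 0.0, 0.9))
    return Subgoal("idle", (0.0, 0.0, 0.0))

def action_head(goal: Subgoal, joints: list[float]) -> list[float]:
    """Stand-in for the fast full-body controller (order of 100 Hz):
    move each joint a fixed fraction toward a goal-derived target."""
    targets = [goal.target[i % 3] for i in range(len(joints))]
    return [q + 0.1 * (t - q) for q, t in zip(joints, targets)]

# One control tick: reason once, then step the whole body toward the subgoal.
goal = reasoning_layer("pick up the mug")
command = action_head(goal, [0.0] * 6)
```

The design point is the two clock rates: the reasoner is too slow to be in the control loop, so it communicates through a compact subgoal that the controller consumes many times per second.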
The open release is the strategically interesting part. NVIDIA is not primarily a software company; it sells GPUs and the compute infrastructure to train and run these models. Releasing the models openly is less an act of generosity than it is a platform move: every developer who builds on GR00T N1.6 or Cosmos Reason 2 needs hardware to run them, and that hardware overwhelmingly comes from Santa Clara. The more robotics developers adopt the NVIDIA stack, the wider the moat gets.
The Hugging Face collaboration deepens this logic. NVIDIA has roughly 2 million developers in its robotics ecosystem; Hugging Face has 13 million AI builders across all domains. Integrating Isaac and GR00T into the LeRobot open-source framework puts NVIDIA's tools directly in front of a community that, until recently, was largely working on language and vision rather than physical robotics. That community crossover is exactly what NVIDIA needs to grow the application layer on top of its hardware.
Alongside the models, NVIDIA released Newton 1.0, an open-source physics engine built for dexterous robot manipulation. Physics simulation has been a persistent bottleneck in robotics training: the gap between how a robot behaves in simulation and how it behaves in the real world (the "sim-to-real gap") remains one of the hardest unsolved problems in the field. Newton is not a complete solution, but it represents a shared, open foundation for the community to iterate on, rather than every lab and company maintaining its own proprietary physics stack.
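One standard tactic for narrowing that sim-to-real gap, and the kind of workflow a shared physics engine makes easier to standardise, is domain randomization: resample the simulator's physics parameters every episode so a policy never overfits to one configuration. The sketch below is generic, not the Newton API; the parameter names and ranges are invented for illustration.

```python
# Domain randomization sketch: a toy "simulator" whose friction and initial
# conditions are redrawn per episode. Generic stdlib code, not Newton's API.
import random
from dataclasses import dataclass

G = 9.81  # gravitational acceleration, m/s^2

@dataclass
class PhysicsParams:
    friction: float    # Coulomb sliding-friction coefficient
    push_speed: float  # initial object speed, m/s

def sample_params(rng: random.Random) -> PhysicsParams:
    """Draw one randomized simulator configuration."""
    return PhysicsParams(
        friction=rng.uniform(0.3, 0.9),
        push_speed=rng.uniform(0.5, 1.5),
    )

def run_episode(params: PhysicsParams, dt: float = 0.01, steps: int = 100) -> float:
    """Toy rollout: an object slides to rest under friction.
    Returns distance travelled, standing in for an episode outcome."""
    v, x = params.push_speed, 0.0
    for _ in range(steps):
        x += v * dt
        v = max(0.0, v - params.friction * G * dt)
    return x

# A training loop would consume a stream of such randomized episodes.
rng = random.Random(0)
outcomes = [run_episode(sample_params(rng)) for _ in range(5)]
```

Randomization does not close the gap by itself; it trades peak in-simulation performance for robustness to whichever real-world parameters the randomized ranges happen to cover.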
There is also OSMO, a new edge-to-cloud compute framework intended to simplify the orchestration of robot training across distributed hardware. The practical problem OSMO addresses is mundane but real: training a physical AI model typically involves moving data between simulation environments, cloud training clusters, and edge devices running on hardware like Jetson Thor. Getting those pieces to talk to each other reliably is currently manual, error-prone, and expensive in engineering time. OSMO tries to make that plumbing invisible.
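The plumbing in question can be pictured as hand-written glue between three stages. None of the following is OSMO's actual interface; it is a sketch, with invented names, of the explicit artifact-shuttling such a framework aims to absorb.

```python
# Hypothetical glue code for a sim -> cloud-train -> edge-deploy handoff.
# Every function here is invented; the point is that each stage boundary is
# a file format and a path that some engineer must keep in sync by hand.
import json
import tempfile
from pathlib import Path

def export_sim_episodes(out_dir: Path) -> Path:
    """Stage 1: dump simulated trajectories to files the trainer can read."""
    path = out_dir / "episodes.json"
    path.write_text(json.dumps([{"obs": [0.0, 0.1], "action": [0.5]}]))
    return path

def train_on_cluster(episodes_path: Path, out_dir: Path) -> Path:
    """Stage 2: stand-in for a cloud training job that emits a checkpoint."""
    episodes = json.loads(episodes_path.read_text())
    ckpt = out_dir / "policy.ckpt.json"
    ckpt.write_text(json.dumps({"trained_on": len(episodes)}))
    return ckpt

def deploy_to_edge(ckpt_path: Path) -> dict:
    """Stage 3: stand-in for pushing the checkpoint to an edge device."""
    return json.loads(ckpt_path.read_text())

# Three explicit handoffs; each path and format is a coordination point
# that can silently break when any one stage changes.
with tempfile.TemporaryDirectory() as d:
    work = Path(d)
    manifest = deploy_to_edge(train_on_cluster(export_sim_episodes(work), work))
```

An orchestration layer's value proposition is exactly to own those boundaries, so that changing one stage does not ripple through every downstream consumer.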
What is worth watching is whether the open models strategy actually accelerates the field or primarily benefits NVIDIA. The history of open AI releases from major labs is mixed. Some, like Meta's LLaMA series, genuinely democratised capability and spawned ecosystems that no single company controls. Others became effectively closed through the weight of the proprietary tooling, data pipelines, and compute requirements built around them. NVIDIA's version sits closer to the second pattern by design: the models are open, but the full-performance path runs through Jetson Thor and the NVIDIA cloud.
That caveat aside, the scale of what was released this week is real. Robots that can reason about what they see and act on that reasoning with full-body coordination are no longer a research programme. They are, as of this week, an open platform with a developer community attached.