Google's Robots Have Learned to Think Before They Act
Google DeepMind's Gemini Robotics 1.5 pairs a physical action model with an embodied reasoning layer that plans and narrates each step before the robot moves. The "think before act" design makes robot decisions transparent, but raises sharper questions about accountability when an articulate robot still gets it wrong.
Read full story →
NVIDIA Bets on Open Collaboration to Challenge the Frontier Labs
The Nemotron 3 model family and a new coalition of AI labs, including Mistral, Perplexity, and Cursor, represent NVIDIA's strategy for the open-model era: use hardware dominance and shared open weights to build an ecosystem that routes around OpenAI and Anthropic. The irony is that the "open" ecosystem runs straight back to NVIDIA hardware.
Read full story →
When Robots Can Explain Themselves, the Displacement Gets Harder to Deny
Gemini Robotics 1.5 can narrate its reasoning before it acts, a capability being framed as a safety and transparency win. But a robot that explains why it is doing your job is still doing your job. Physical AI entering the real world is the moment when the "we'll retrain workers" argument finally collapses under its own weight.
Read opinion →