AI Daily
[Image: A robot arm reading a pressure gauge in an industrial facility]
Robotics • April 20, 2026

DeepMind's Gemini Robotics-ER 1.6 Can Now Read Gauges and Navigate the Real World

By AI Daily Editorial • April 20, 2026

Google DeepMind has released Gemini Robotics-ER 1.6, an upgraded embodied reasoning model that represents one of the most practically focused robotics AI releases in recent memory. The model does not make sweeping claims about general intelligence. What it does instead is read the value off a pressure gauge, count the objects on a factory floor, and determine whether a robot actually completed the task it was asked to do. That specificity is what makes this release interesting.

The headline capability is instrument reading. Given a complex industrial gauge or sight glass, Gemini Robotics-ER 1.6 uses a technique called agentic vision: it zooms into images programmatically, executes code to estimate proportions, and returns accurate readings. This was developed in close collaboration with Boston Dynamics, whose robots operate in exactly the environments where gauges matter: warehouses, industrial plants, inspection sites. It is a narrow capability, but it is one that turns a robot from a platform that moves things into a platform that monitors and reports. The difference in commercial value is substantial.
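To make the "execute code to estimate proportions" step concrete, here is a minimal sketch of the kind of calculation an agentic-vision loop might run after locating a gauge needle in a zoomed image. The gauge geometry, default angles, and function name are illustrative assumptions, not anything published by DeepMind.

```python
def gauge_reading(needle_deg: float,
                  min_deg: float = 225.0,
                  max_deg: float = -45.0,
                  min_val: float = 0.0,
                  max_val: float = 100.0) -> float:
    """Map a detected needle angle to a gauge value by linear interpolation.

    Assumes a typical analog dial whose needle sweeps clockwise from
    min_deg (at min_val) to max_deg (at max_val).
    """
    fraction = (needle_deg - min_deg) / (max_deg - min_deg)
    fraction = max(0.0, min(1.0, fraction))  # clamp to the dial's range
    return min_val + fraction * (max_val - min_val)

# A needle pointing straight up (90 degrees) on a 0-100% gauge sits at
# the midpoint of the 225-to-negative-45-degree sweep:
print(gauge_reading(90.0))  # 50.0
```

The point of running this as code rather than estimating by eye is precision: the vision step only has to localise the needle, and the arithmetic does the rest deterministically.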

Technically, the model sits in a two-layer architecture. Gemini Robotics-ER 1.6 acts as the high-level brain, handling planning, spatial reasoning, and task verification. It passes natural-language instructions down to Gemini Robotics 1.5, which handles the motor commands. The division of labour mirrors how humans work: conscious planning at a high level, trained reflexes at the execution level. Whether the analogy holds under pressure is the question roboticists will be testing over the next few months.
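The division of labour described above can be sketched as a toy planner/executor loop. The class names, step format, and string-matching verification here are invented for illustration; the real models exchange far richer context (images, scene state, motor feedback) than plain strings.

```python
class ReasoningLayer:
    """Stands in for the high-level model: plans steps and verifies outcomes."""

    def plan(self, goal: str) -> list[str]:
        # A real planner would reason over camera frames and scene state.
        return [f"locate {goal}", f"grasp {goal}", f"place {goal} in bin"]

    def verify(self, step: str, observation: str) -> bool:
        # Success detection: compare the instruction against what happened.
        return step.split()[0] in observation


class ActionLayer:
    """Stands in for the low-level model: turns instructions into motion."""

    def execute(self, step: str) -> str:
        return f"{step.split()[0]} completed"  # stubbed motor feedback


brain, body = ReasoningLayer(), ActionLayer()
for step in brain.plan("wrench"):
    observation = body.execute(step)
    assert brain.verify(step, observation), f"failed: {step}"
print("task verified complete")
```

The closed loop is the important part: the reasoning layer does not just issue instructions, it checks each observation before moving on, which is what the article's "success detection" capability refers to.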

DeepMind is also emphasising safety improvements. The company says this is the safest robotics model it has released, measured by compliance with safety policies on adversarial spatial reasoning tasks. This matters for physical AI in a way it does not for conversational AI: a language model that gives a bad answer costs a correction, while a robot that makes a bad decision in a factory costs an injury or a breakdown. The safety improvements are genuine, but the real test will come from commercial deployment, not controlled benchmarks.

The availability model reflects where physical AI actually is right now. Gemini Robotics-ER 1.6 is available today through the Gemini API and Google AI Studio for developers. The lower-level Gemini Robotics 1.5, which translates plans into motor commands, remains restricted to select partners. DeepMind is clearly prioritising the reasoning layer, where general software developers can build applications, over the execution layer, where integration with specific robot hardware is required. That is a reasonable staging decision, but it also means most developers will be building planning and monitoring applications rather than full robotic autonomy for now.

The broader context here is a quiet but real competition in physical AI between Google DeepMind and NVIDIA. NVIDIA used National Robotics Week earlier this month to release the Newton 1.0 physics engine, the Isaac GR00T model family, and open blueprints for its physical AI stack. Google is pushing the reasoning layer. NVIDIA is pushing the simulation and training layer. Both are giving away software while betting on hardware and compute to generate returns. The race to be the platform for physical AI is well underway, and the companies that define what robots think and how they are trained will have extraordinary leverage over the robotics industry that follows.

For now, Gemini Robotics-ER 1.6 is a serious incremental release that solves real problems. Instrument reading alone opens up inspection and monitoring use cases that previously required expensive specialised hardware. Success detection makes it possible to close the loop on robotic tasks without a human watching every step. These are not science fiction milestones. They are the unglamorous capabilities that turn demonstrations into deployments.
