Google DeepMind released Gemini Robotics-ER 1.6 on April 14, a major upgrade to its reasoning-first robotics model that introduces industrial instrument reading, improved multi-camera spatial awareness, and more reliable task completion verification. Boston Dynamics is integrating the model into its Spot robot’s inspection platform, making it immediately available for industrial facilities worldwide.

What the Model Does

Gemini Robotics-ER 1.6 is the “strategist” layer in DeepMind’s dual-model robotics architecture. It doesn’t directly control robot limbs. Instead, it handles high-level spatial reasoning, task planning, and success detection, then passes decisions to vision-language-action models that execute physical movements. According to DeepMind’s announcement, the model can natively call tools including Google Search, VLAs, and third-party user-defined functions.
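The strategist/executor split described above is essentially a tool-routing pattern: the reasoning model decides *what* to do and emits a tool call; a thin layer dispatches it to search, a VLA, or a user function. The sketch below is purely illustrative of that pattern under assumed names (`execute_motion`, `log_anomaly` are hypothetical handlers), not the Gemini API's actual tool-call schema.

```python
# Illustrative sketch of the "strategist" pattern: a high-level planner
# emits named tool calls; a dispatcher routes them to registered handlers
# (a VLA executor, a logging function, etc.). All tool names here are
# hypothetical stand-ins.

from typing import Any, Callable


class ToolDispatcher:
    """Routes tool calls from a planning model to registered handlers."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def dispatch(self, name: str, **kwargs: Any) -> Any:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


# Register a stand-in VLA executor and a user-defined function.
dispatcher = ToolDispatcher()
dispatcher.register("execute_motion", lambda skill: f"VLA running: {skill}")
dispatcher.register("log_anomaly", lambda msg: f"logged: {msg}")

# The planner's decision ("open the valve") becomes a routed tool call.
result = dispatcher.dispatch("execute_motion", skill="open_valve")
```

The real integration would have the Gemini API return structured function-call objects; this only shows why the "strategist" never needs to touch motor control directly.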

The headline new capability: instrument reading. Industrial facilities are filled with pressure gauges, thermometers, sight glasses, and digital readouts that require constant monitoring. Previously, robots could detect objects but couldn’t interpret what a gauge needle pointed to or estimate how much liquid filled a sight glass. Gemini Robotics-ER 1.6 reads these instruments by combining spatial reasoning with world knowledge, accounting for camera distortion, multiple needle positions across decimal places, and unit labels on gauge faces.
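Whatever perception machinery sits in front, a gauge reading ultimately reduces a needle pose to a physical value on the printed scale. A minimal sketch of that final step, assuming the needle angle and the gauge's scale geometry are already known; real gauges add the nonlinearity, parallax, and multi-needle cases the model has to handle on top of this.

```python
# Map a detected needle angle onto a gauge's labeled scale.
# Assumes a linear scale between the gauge's min and max tick marks;
# the angles and range below are an invented example gauge.

def gauge_value(needle_deg: float, min_deg: float, max_deg: float,
                min_val: float, max_val: float) -> float:
    """Linearly interpolate the needle angle into the gauge's value range."""
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

# A 0-10 bar gauge sweeping from -135 deg to +135 deg, needle straight up:
reading = gauge_value(0.0, -135.0, 135.0, 0.0, 10.0)  # midpoint -> 5.0 bar
```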

DeepMind developed this capability through direct collaboration with Boston Dynamics, whose Spot robot already performs inspection routes through factories, refineries, and infrastructure sites. As MarkTechPost noted, this is “a genuinely new capability” that did not exist in any prior version of the model.

Multi-View Reasoning and Success Detection

The second significant upgrade is multi-camera spatial reasoning. Most industrial robots run multiple camera feeds simultaneously: overhead views, wrist-mounted cameras, and fixed facility cameras. Gemini Robotics-ER 1.6 fuses information across these streams to understand spatial relationships, even in occluded or dynamically changing environments.
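To make the data problem concrete: each camera reports its own detections, and the fusion step must reconcile them into one view of the scene. The sketch below is a deliberately simple stand-in (keep the highest-confidence sighting per object), not DeepMind's fusion method; the camera names and labels are invented.

```python
# Toy fusion across camera streams: each stream yields (label, confidence)
# detections; keep the best-confidence sighting of each object. This is
# illustrative only -- the model's actual multi-view reasoning is learned,
# not a hand-written rule like this.

def fuse_detections(streams: dict[str, list[tuple[str, float]]]) -> dict:
    """Per object label, keep the (camera, confidence) of the best detection."""
    best: dict[str, tuple[str, float]] = {}
    for cam, detections in streams.items():
        for label, conf in detections:
            if label not in best or conf > best[label][1]:
                best[label] = (cam, conf)
    return best

views = {
    "overhead": [("valve_3", 0.71), ("gauge_A", 0.93)],
    "wrist":    [("valve_3", 0.88)],  # occluded overhead, clearer up close
}
fused = fuse_detections(views)
```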

This feeds directly into success detection, the model’s ability to determine whether a task is actually complete. In DeepMind’s internal benchmarks, the model shows “significant improvement” over both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash on spatial reasoning tasks including precision object detection, counting, relational logic, and constraint compliance. In one test, the model correctly identified tool counts and refused to point at objects not present in the scene, while its predecessor hallucinated a wheelbarrow.
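Success detection is what turns an open-loop action into a closed loop: act, ask the verifier whether the goal state is actually visible, and retry on failure. A minimal sketch of that loop, with a stub standing in for the model call; the retry structure, not the stub, is the point.

```python
# Act-verify-retry skeleton. `act` performs the task step, `is_done`
# stands in for a success-detection query to the reasoning model.

def run_with_verification(act, is_done, max_attempts: int = 3) -> int:
    """Execute `act`, verify with `is_done`, retry up to `max_attempts`.

    Returns the number of attempts used, or raises if never verified.
    """
    for attempt in range(1, max_attempts + 1):
        act()
        if is_done():
            return attempt
    raise RuntimeError("task not verified as complete")

# Stub: an action that only succeeds on the second try.
state = {"tries": 0}

def flaky_act() -> None:
    state["tries"] += 1

def check() -> bool:
    return state["tries"] >= 2

attempts = run_with_verification(flaky_act, check)  # succeeds on attempt 2
```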

Boston Dynamics Integration

Boston Dynamics is shipping the integration through its Orbit software platform, specifically its AI Visual Inspection (AIVI) and AIVI-Learning systems. According to Robotics & Automation News, Spot can now perform equipment monitoring (gauges, conveyor systems), safety and compliance checks including 5S audits, hazard detection for leaks and debris, and materials and inventory tracking.

“Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously,” said Marco da Silva, vice president and general manager of Spot at Boston Dynamics, in comments included in DeepMind’s blog post.

Boston Dynamics noted the system provides “transparent reasoning,” allowing facility operators to see how the AI reaches its conclusions. The AIVI-Learning system improves per-facility performance by sharing data back to Boston Dynamics; the company states that customer data goes to Boston Dynamics alone.

The Model Is Already Available

Gemini Robotics-ER 1.6 is available now through the Gemini API and Google AI Studio. DeepMind has published a developer Colab notebook with configuration examples and prompting guides for embodied reasoning tasks.
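Since the model is served through the standard Gemini API surface, a request looks like any other multimodal generateContent call: a text prompt plus a camera frame. Below is a hedged sketch of assembling such a request body; the JSON shape follows the Gemini REST API's `contents`/`parts` convention, but the exact fields and model identifier should be confirmed against DeepMind's Colab notebook and Google AI Studio.

```python
# Sketch of a Gemini API request body for an embodied-reasoning query:
# one text part plus one inline image part (base64-encoded camera frame).
# Field names follow the documented REST convention; verify against the
# current API reference before use.

import json


def make_request_body(prompt: str, image_b64: str) -> str:
    body = {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"inline_data": {"mime_type": "image/jpeg",
                                 "data": image_b64}},
            ]
        }]
    }
    return json.dumps(body)


body = make_request_body(
    "Read the pressure gauge and report the value in bar.",
    "<base64-encoded-frame>",  # placeholder, not real image data
)
```

The same payload works from the Python SDK, a raw HTTP client, or AI Studio's request builder, which is what makes the model usable from the existing developer ecosystem.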

The Convergence Point

The same perception-reasoning-action loop that powers software agents in tools like OpenClaw and LangChain now operates a robot that reads a pressure gauge and reports anomalies to an operations center. Gemini Robotics-ER 1.6 isn’t a research demo: it’s shipping inside one of the most widely deployed industrial inspection robots in the world, available through the same API ecosystem that developers use for text and code generation. The physical agent layer and the software agent stack are no longer separate roadmaps.