Google DeepMind released Deep Research Max on April 21, a new autonomous research agent built on Gemini 3.1 Pro that can run exhaustive, multi-source investigations and deliver fully cited analyses from a single API call. A faster sibling, Deep Research, ships alongside it for interactive use cases. Both are available in public preview through paid tiers of the Gemini API.
Two Agents, Two Speed Profiles
The split is deliberate. Deep Research replaces Google’s December preview release, delivering lower latency and lower cost at higher quality; it is built for real-time chat interfaces where users expect immediate results. Deep Research Max goes in the opposite direction: it uses extended test-time compute to iteratively reason, search, and refine its output, according to Google’s announcement. Google positions Max for asynchronous background workflows, citing the example of a nightly cron job that generates exhaustive due diligence reports for analyst teams by morning.
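The nightly-cron pattern Google describes can be sketched as a batch request assembled ahead of an asynchronous run. This is an illustrative sketch only: the model identifier, `mode` flag, and field names below are assumptions for the sake of the example, not documented Gemini API parameters.

```python
# Hypothetical sketch of the overnight due-diligence pattern described above.
# All identifiers ("deep-research-max", "background", "tasks", etc.) are
# illustrative assumptions, not documented API fields.
import json


def build_overnight_request(companies: list[str]) -> dict:
    """Assemble one batch of research tasks for an asynchronous run."""
    return {
        "model": "deep-research-max",  # assumed model identifier
        "mode": "background",          # async run; results collected by morning
        "tasks": [
            {
                "prompt": (
                    f"Exhaustive due diligence report on {name}, "
                    "with citations for every claim."
                ),
                "output_format": "report_with_citations",
            }
            for name in companies
        ],
    }


request = build_overnight_request(["Acme Corp", "Globex"])
print(json.dumps(request, indent=2))
```

A scheduler would submit a payload like this after market close and fetch the finished reports before analysts arrive, which is the workflow Google cites for Max.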
Product Manager Lukas Haas and Program Manager Srinivas Tadepalli of Google DeepMind describe the upgrade as a shift from “sophisticated summarization engine” to “a foundation for enterprise workflows across finance, life sciences, market research, and more.”
MCP Turns It Into a General-Purpose Agent
The most significant technical addition is Model Context Protocol (MCP) support. Developers can now connect Deep Research to proprietary data sources, financial feeds, and specialized databases through arbitrary tool definitions. This transforms the agent from a web searcher into one that can navigate any data repository, per Google.
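In practice, connecting a proprietary source would mean registering an MCP server alongside the agent's built-in retrieval. The configuration below is a hedged sketch: the tool type, server URL, and key names are hypothetical, standing in for whatever schema the Gemini API actually exposes.

```python
# Illustrative sketch of wiring a proprietary MCP data source into a
# research request. The "mcp_server" tool type, the URL, and the tool
# names are hypothetical placeholders, not documented API values.
research_config = {
    "model": "deep-research",
    "tools": [
        {"type": "web_search"},  # default web retrieval stays enabled
        {
            "type": "mcp_server",                       # assumed tool type
            "url": "https://mcp.example.com/filings",   # proprietary feed
            "allowed_tools": ["search_filings", "get_filing"],
        },
    ],
}
```

The design point is that the agent treats the MCP server's tools like any other retrieval source, which is what lets it "navigate any data repository" rather than just the open web.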
Google is collaborating with FactSet, S&P Global, and PitchBook on MCP server designs to let shared customers plug financial data directly into Deep Research workflows. The agent also now generates native charts and infographics inline using HTML or Google’s Nano Banana format, a first for Deep Research in the Gemini API.
Other additions: collaborative planning (review and tweak the research plan before execution); multimodal input from PDFs, CSVs, images, audio, and video; real-time streaming of intermediate reasoning steps; and the option to shut off web access entirely, restricting queries to proprietary data only.
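Taken together, those options describe a request shape for locked-down enterprise runs. The sketch below shows one plausible combination; every key name is an assumption for illustration, not a documented parameter.

```python
# Hedged sketch combining the feature toggles listed above into one
# request config. All key names are illustrative assumptions.
proprietary_only_run = {
    "model": "deep-research-max",
    "plan_review": True,       # pause so a human can edit the research plan
    "stream_reasoning": True,  # stream intermediate reasoning steps live
    "web_access": False,       # restrict retrieval to attached data only
    "inputs": [
        {"type": "file", "path": "q3_filings.pdf"},  # multimodal attachment
        {"type": "file", "path": "positions.csv"},
    ],
}
```

Disabling web access while attaching local files is the configuration most relevant to the regulated-industry use cases discussed later in the piece.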
Benchmark Claims Come With Caveats
Google’s internal benchmarks show Deep Research Max outperforming its December predecessor by a wide margin on retrieval and reasoning tasks. The agent consults more sources and catches nuances the older version missed.
The comparison with competitors is less clean. The Decoder notes that Google benchmarked against OpenAI’s GPT-5.4 and Anthropic’s Opus 4.6, but GPT-5.4 is a general search model, not OpenAI’s dedicated deep research agent (which runs on GPT-5.2). OpenAI’s strongest search model, GPT-5.4 Pro, was left out of the comparison. Anthropic also reports higher BrowseComp scores for Opus 4.6 than Google’s benchmarks show, a discrepancy likely rooted in testing methodology (raw API vs. wrapped tooling).
The Infrastructure Behind It
Both agents run on the same infrastructure powering research features in Google’s consumer products: the Gemini app, NotebookLM, Google Search, and Google Finance. Enterprise and startup access through Google Cloud is coming, though Google has not announced specific dates or pricing for cloud-tier availability.
What Changes for Enterprise Research Workflows
The FactSet, S&P Global, and PitchBook collaborations signal where Google sees the commercial opportunity: financial services teams that currently spend hours gathering context from gated data sources. A single API call that blends proprietary feeds with web research and produces a cited, chart-embedded report is a direct substitute for junior analyst hours. The question is whether MCP integration proves robust enough in regulated environments where data provenance and audit trails matter more than speed.