Sundar Pichai described a future where Google Search stops being an answer engine and starts being a task coordinator. Speaking on the Cheeky Pint podcast with Stripe co-founder John Collison and investor Elad Gil, the Alphabet CEO laid out a vision where search queries trigger multi-step agent workflows instead of returning links.

“If I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running,” Pichai said, according to Search Engine Land.

He went further, describing Search as an orchestration layer for autonomous agents: “Search would be an agent manager in which you’re doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and you’re getting a bunch of stuff done.”

Already Happening in AI Mode

Pichai pointed to behavior changes already visible in Google’s AI Mode, the AI-powered search experience that launched in 2025. “Today in AI Mode in Search, people do deep research queries,” he said, per Search Engine Land. “People adapted to that. I think people will do long-running tasks.”

The distinction between Search and Gemini remains deliberate. Pichai said Google is running both products in parallel and expects them to “profoundly diverge in certain ways,” echoing comments from Google’s VP of Search, Liz Reid, last month.

The Infrastructure Behind the Vision

The agentic Search trajectory runs on the same infrastructure Google plans to fund with $175 billion to $185 billion in 2026 capital expenditures. Pichai confirmed the range in the interview, describing the company as “supply-constrained” across AI surface areas, according to PPC Land.

Google’s Gemini Flash models, which Pichai said operate at “90% the capability of the pro models” while running faster and cheaper, are the likely execution layer for agentic Search. The company’s vertical integration of custom TPUs (now in their seventh generation) with model development gives it a cost advantage in running high-volume agent workflows at search scale.

On latency, Pichai described an internal system where Search sub-teams operate on millisecond-level budgets: 10 to 30 milliseconds depending on feature type. Nick Fox, Google’s SVP of Knowledge & Information, posted on LinkedIn that Google has reduced Search latency by over 35% in five years while adding more AI capabilities.

The Interface War Sharpens

Pichai’s comments place Google squarely in competition with Microsoft for the “first interface to agents” position. Microsoft launched Agent 365 this week as an enterprise governance platform, and its Copilot product already occupies the agent-interface role across Windows and Office. Google is betting that search, where billions of users already start their workflows, is the natural surface for agent orchestration.

The distinction matters for builders: Microsoft’s agent play is enterprise-first (governance, compliance, lifecycle management), while Google’s is consumer-first (turn search queries into completed tasks). Both are compressing the timeline for agent adoption across different user bases.