Addy Osmani, a director of engineering at Google Cloud AI who works on Gemini, published a framework on April 14 that names and defines a discipline most web teams have not started thinking about: Agentic Engine Optimization.
AEO, as Osmani frames it in his original post, is “the practice of structuring, formatting, and serving technical content so that AI coding agents can actually use it, not just human readers.” The analogy is SEO, but for a different consumer: autonomous software that fetches, parses, and reasons over content without ever rendering a page or clicking a link.
The Behavioral Gap
The framework rests on research that quantifies how differently agents consume content. Osmani cites a recent paper studying HTTP traffic from nine major AI coding agents, including Claude Code, Cursor, Cline, Aider, and VS Code. The findings: agents compress multi-page human browsing into one or two HTTP requests. Scroll depth is zero. Time-on-page is 400 milliseconds. No link clicks, no tutorial completions, no UI interactions.
Every engagement metric web teams have optimized for over the past decade becomes invisible when the reader is an agent. The agent visits, reads the raw content, and leaves. Your analytics recorded nothing useful, but the agent absolutely consumed your documentation.
Five Optimization Dimensions
Osmani defines five areas where content either works for agents or fails silently:
Discoverability. Can agents find your content without rendering JavaScript? If documentation requires a single-page app to display, agents that fetch raw HTML see nothing.
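A quick heuristic for this first check can be sketched in a few lines of Python. The 200-character threshold and the "SPA shell" test are illustrative assumptions, not part of Osmani's framework:

```python
# Heuristic sketch: does a page's raw HTML carry real text, or is it
# an SPA shell that only renders content after JavaScript runs?
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.chunks.append(data.strip())

def looks_like_spa_shell(raw_html: str, min_chars: int = 200) -> bool:
    """True if the raw HTML has too little visible text to be useful
    to an agent that does not execute JavaScript."""
    parser = TextExtractor()
    parser.feed(raw_html)
    return len(" ".join(c for c in parser.chunks if c)) < min_chars
```

Running this against your own docs pages, fetched without a browser, approximates what a non-rendering agent actually sees.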
Parsability. Osmani recommends serving clean Markdown alongside traditional HTML pages. Markdown eliminates the noise from navigation, scripts, and layout that agents must otherwise strip. He suggests making .md versions directly accessible and discoverable.
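One way to serve such twins, sketched here as a hypothetical nginx rule (the paths and directory layout are assumptions, not from Osmani's post), is to publish a pre-built `.md` counterpart for each docs page:

```nginx
# Hypothetical rule: requests for /docs/<page>.md serve a pre-built
# Markdown twin of the HTML page, with a Markdown content type.
location ~ ^/docs/(.+)\.md$ {
    root /var/www/site;
    default_type text/markdown;
    try_files /md/$1.md =404;
}
```

Linking the `.md` URL from the HTML page (or from an index file) covers the "discoverable" half of the recommendation.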
Token efficiency. This is the constraint most teams underestimate. Agents have practical context windows of 100K to 200K tokens. Large documentation pages can exceed those limits, causing truncation, skipped content, or hallucinated outputs. Osmani’s recommendation: put answers in the first 500 tokens, keep pages compact, eliminate preambles.
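The budget check can be approximated with the common rough heuristic of about four characters per English token. The ratio and threshold below are illustrative, not from the post:

```python
# Back-of-the-envelope sketch: estimate whether a docs page fits an
# agent's context budget. Four characters per token is a rough rule
# of thumb for English prose, not an exact tokenizer.
def estimate_tokens(text: str) -> int:
    return len(text) // 4

def fits_budget(text: str, budget: int = 100_000) -> bool:
    """100K tokens matches the lower bound of the practical
    context windows the article cites (100K to 200K)."""
    return estimate_tokens(text) <= budget
```

A CI step that runs this over rendered docs output would flag pages drifting past the limit before agents start truncating them.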
Capability signaling. Documentation should tell agents what an API does, not just how to call it. Osmani points to emerging patterns: llms.txt as a structured index, skill.md files to define capabilities, AGENTS.md as a machine-readable entry point.
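For reference, the llms.txt proposal uses a small Markdown index: an H1 title, a one-line blockquote summary, and sections of annotated links. The project name and URLs below are placeholders:

```markdown
# ExampleLib

> ExampleLib is a hypothetical Python client for the Example API.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first call
- [API reference](https://example.com/docs/api.md): full endpoint list

## Optional

- [Changelog](https://example.com/changelog.md)
```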
Access control. A misconfigured robots.txt can block AI agent traffic entirely. If your content is not accessible to the User-Agent strings these agents use, optimization is irrelevant.
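A minimal sketch of a permissive robots.txt follows. GPTBot and ClaudeBot are published AI crawler tokens used here as examples, not an exhaustive list; note too that many coding agents fetch through generic HTTP clients and may never consult robots.txt at all:

```
# Hypothetical robots.txt: explicitly allow known AI crawler tokens
# to reach the docs tree while keeping default access open.
User-agent: GPTBot
Allow: /docs/

User-agent: ClaudeBot
Allow: /docs/

User-agent: *
Allow: /
```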
The Analytics Blind Spot
The framework’s most actionable section identifies how to detect agent traffic using server-side logs. Osmani catalogs the HTTP signatures of major coding agents: Claude Code uses Node.js with an axios/1.8.4 User-Agent; Cursor uses Node.js with got; Cline and Junie use curl/8.4.0; Windsurf uses Go’s colly library. Each leaves a distinct fingerprint in server logs even though client-side analytics see nothing.
For teams running developer documentation, API references, or product guides, this means agent traffic may already constitute a significant share of actual consumption. Without server-side segmentation, that traffic is invisible.
The Institutional Weight
The framework carries unusual weight because of who published it. Osmani works on Gemini at Google Cloud AI. As Search Engine Land noted, Google itself has not published an official stance on AEO. Google’s John Mueller has recommended against separate Markdown pages for LLMs, and Google does not use llms.txt. Osmani’s framework represents his professional analysis, not Google policy.
The search community noticed the gap. Search Engine Roundtable captured the reaction: “At some point, Google needs to come out with an official stance.” The market is already responding to AEO signals because the behavioral evidence is visible in server logs, regardless of whether Google endorses the optimization patterns.
The Practical Question
For anyone building products, publishing documentation, or running a content operation: AEO is not a future consideration. Agents are already fetching and acting on web content at scale. Scheduled tasks in Claude Cowork pull from web sources, OpenClaw workflows scrape documentation as context, and Cloudflare’s Agent Lee queries internal and external resources via natural language.
The teams that optimize for agent-readable content first will have a structural advantage as agentic interfaces displace traditional search-and-click workflows. The teams that discover their documentation has been invisible to agents for months will be playing catch-up.