Glean released the Enterprise Agent Development Lifecycle (ADLC) on May 12, a seven-stage framework for building, governing, and measuring AI agents across organizations. The release includes eight new platform capabilities targeting the areas where most enterprises stall: context, governance, and measurement.

Born From Internal Agent Chaos

The framework emerged from Glean’s own operational problem. After releasing autonomous agents in December 2025, the company found itself managing thousands of agents built across teams with no consistent way to govern or measure them.

“The problem that we’re solving with the Agent Development Lifecycle is not just about building the agents,” product manager Selene Kim told The AI Economy. “How do we make sure that those agents that are built, can you get it to work in production? How do you measure whether it is successful? How do you know when to say it’s not working as expected?”

Seven Stages, Eight Capabilities

The ADLC defines seven stages: Opportunity, Design, Performance, Input, Develop, Launch, and Monitor and Improve. Glean calls it an “opinion piece” for the broader AI community, describing the framework as platform-agnostic and freely adoptable.

The concrete product launches mapped to the framework include:

Generally available now:

- Auto Mode Agent Builder, which takes a natural-language description and generates an agent capable of planning and executing across Glean’s Enterprise Graph.
- Debug and Trace Views for full observability into every agent run, cataloging inputs, tool calls, LLM decisions, and outputs (a sketch of what such a trace captures follows this list).
- Sub-agents that handle discrete tasks under the coordination of a parent agent.
- An expanded Agent Sandbox with secure file-system access and code execution in virtual private clouds.
- New Agent Library Controls: verification badges, featured agents, departmental categories, and soft-delete with admin restore.
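Glean has not published a trace schema, so the following is only a minimal sketch of what that kind of per-run observability typically captures; every name here is a hypothetical stand-in, not Glean’s API.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolCall:
    """One tool invocation recorded during an agent run."""
    tool: str                  # e.g. a hypothetical "search_enterprise_graph"
    arguments: dict[str, Any]  # what the agent passed to the tool
    result_summary: str        # what came back, condensed for review

@dataclass
class AgentTrace:
    """Hypothetical trace record in the spirit of Debug and Trace Views:
    the inputs, LLM decisions, tool calls, and final output of one run."""
    agent_id: str
    inputs: dict[str, Any]
    llm_decisions: list[str] = field(default_factory=list)   # planning steps
    tool_calls: list[ToolCall] = field(default_factory=list)
    output: str | None = None
```

The point of a record like this is that every step an agent took can be replayed and audited after the fact, which is what separates debuggable agents from black boxes.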

In beta:

- Content and Scheduled Triggers, which let agents react to content changes, scheduled runs, and external events.
- Agent Access Policies, organization-wide guardrails that can block or flag sensitive material. Both concepts are sketched below.
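Neither feature’s syntax is public, so this is a purely illustrative sketch of the two ideas, a scheduled trigger and an organization-wide guardrail, with every name hypothetical.

```python
# All names are hypothetical; Glean has not published its trigger or
# policy syntax. This only illustrates the two beta concepts.

scheduled_trigger = {
    "agent": "sales-pipeline-digest",   # hypothetical agent name
    "type": "scheduled",                # vs. "content" or "external_event"
    "cron": "0 8 * * MON",              # run every Monday at 08:00
}

access_policy = {
    "scope": "organization",            # guardrail applies to every agent
    "detect": ["ssn", "credit_card"],   # categories of sensitive material
    "action": "block",                  # or "flag" for human review
}

def enforce(policy: dict, detected: list[str]) -> None:
    """Apply an organization-wide guardrail to one agent response."""
    hits = [d for d in detected if d in policy["detect"]]
    if hits and policy["action"] == "block":
        raise PermissionError(f"Blocked: sensitive material {hits}")
    if hits:
        print(f"Flagged for review: {hits}")
```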

Coming soon: A rebuilt Agent Insights Dashboard tracking adoption, top use cases, estimated hours saved, and feedback trends.

Context as the Bottleneck

Kim highlighted context quality, not model capability, as the primary failure point for enterprise agents. “An agent has enough context when it is able to reliably complete its goal using the information at its disposal,” she told No Jitter. Agents with too little context succeed as proofs of concept but fail in production; agents with too much context fail because the underlying LLM cannot separate signal from noise.
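That trade-off can be made concrete with a toy context assembler: rank candidate documents by relevance and stop at a token budget. This is a generic sketch of the pattern Kim describes, not Glean’s retrieval logic; the threshold, budget, and scoring are all assumptions.

```python
def assemble_context(candidates: list[tuple[str, float]],
                     min_relevance: float = 0.5,
                     token_budget: int = 4000) -> list[str]:
    """Select context for an agent run: enough relevant material to
    complete the goal, capped so low-signal text does not drown it out.

    Set min_relevance too high (or the budget too low) and the agent is
    starved, the works-in-demo, fails-in-production pattern; drop the
    threshold entirely and the LLM must fish signal out of noise.
    """
    selected: list[str] = []
    used = 0
    # Consider the highest-relevance documents first.
    for text, score in sorted(candidates, key=lambda c: c[1], reverse=True):
        if score < min_relevance:
            break  # everything below the threshold is treated as noise
        cost = len(text.split())  # crude token estimate for the sketch
        if used + cost > token_budget:
            break
        selected.append(text)
        used += cost
    return selected
```

Both failure modes in Kim’s framing fall out of the same two knobs: the relevance cutoff and the budget.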

“Agents are just another type of software. In much the same way that traditional software development lifecycle practices apply across different programming languages, the ADLC is applicable across different models from different providers,” Kim told No Jitter.

The Agent Ops Race

Glean’s ADLC enters a crowded field. Microsoft’s Copilot Studio shipped enhanced agent governance tools in its April 2026 update. Anthropic offers Claude Managed Agents for hosted agent lifecycle management. SAP just unveiled 200+ agents at Sapphire 2026. The common thread: the industry is shifting from “can we build agents” to “can we operate thousands of them without losing control.” Glean is betting that its enterprise search heritage, specifically its Enterprise Graph connecting siloed organizational data, gives it a structural advantage in the context layer that governs agent quality.