Gartner projects Fortune 500 companies will deploy an average of 150,000 AI agents each by 2028, up from roughly 15 per company in 2025. The estimate, drawn from multiple surveys and published in an April 28 press release, represents a 10,000x scaling trajectory over three years.
Where the Growth Comes From
Max Goss, senior director analyst at Gartner, told Computerworld that the next wave of agents will move beyond text summarization into automating spreadsheets, Word documents, and multi-step workflows. Google Workspace and Microsoft 365 already embed AI interfaces with automated workflows. The agents coming next will handle delegated work, not just respond to prompts.
“We’ve seen a sort of new appreciation in the industry of what agent AI can do,” Goss told Computerworld.
Customer service and data analytics are the domains where Gartner sees the highest confidence in agent value today. Regulated verticals like finance and healthcare will move more slowly, requiring guardrails to reduce hallucinations and errors before scaling.
The Reliability Problem
At 150,000 agents per company, uptime expectations will mirror those of server infrastructure: 100%. Goss flagged that companies will need to spread agents across multiple models and hardware resources to ensure reliability, according to Computerworld. Heavy usage has already led providers such as Anthropic and OpenAI to throttle or cut off LLM access, undermining enterprise reliability.
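The multi-model redundancy Goss describes amounts to a failover pattern: route a request to a primary model provider and, if it throttles or fails, retry against a backup. A minimal sketch, assuming each provider is just a callable that either returns a completion or raises; the provider names and the `RateLimitError` class here are illustrative, not any vendor's real API:

```python
# Illustrative multi-provider failover. Providers are plain callables;
# a real deployment would wrap actual SDK clients with this interface.

class RateLimitError(Exception):
    """Raised when a provider throttles or rejects a request."""

def complete_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimitError as exc:
            errors.append((name, str(exc)))  # record failure, try the next one
    raise RuntimeError(f"all providers failed: {errors}")

# Stub providers simulating one throttled endpoint and one healthy one.
def throttled_provider(prompt):
    raise RateLimitError("rate limited")

def healthy_provider(prompt):
    return f"summary of: {prompt}"

if __name__ == "__main__":
    name, result = complete_with_fallback(
        "Q3 expense report",
        [("primary", throttled_provider), ("secondary", healthy_provider)],
    )
    print(name, result)  # request silently fails over to the secondary
```

The ordering of the provider list encodes the routing policy; a production version would add retries, timeouts, and health tracking per provider.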
Fully autonomous agents remain unlikely within two years. Goss expects semi-autonomous agents handling multi-step processes in specific domains, with humans still in the loop for security and governance decisions.
Shadow AI and Sprawl Governance
Goss warned IT leaders against blanket agent bans. “If they just block all agents, then employees are going to probably go around your controls. They might use unsanctioned tools, otherwise known as shadow AI, and I think that’s a greater risk,” he told Computerworld.
The alternative: proactive sanctioning with visibility controls. Without governance, poor management creates gaps that break processes or open security vulnerabilities. Goss also cautioned against layering agents on top of legacy processes. “Process design and agentic AI go hand in hand,” he said.
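The sanctioning-with-visibility approach can be pictured as an agent registry: sanctioned agents are allow-listed, and every run attempt, sanctioned or not, is logged so shadow usage surfaces for review rather than disappearing around a ban. This is a hypothetical sketch, not a Gartner or vendor framework; all names here are invented for illustration:

```python
# Illustrative agent registry: sanction agents explicitly, but log every
# run attempt so unsanctioned ("shadow") usage is visible, not hidden.

from dataclasses import dataclass, field

@dataclass
class AgentRegistry:
    sanctioned: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def register(self, agent_id, owner):
        """Sanction an agent and record who owns it."""
        self.sanctioned.add(agent_id)
        self.audit_log.append(("register", agent_id, owner))

    def check(self, agent_id):
        """Return whether the agent is sanctioned; always record the attempt."""
        allowed = agent_id in self.sanctioned
        self.audit_log.append(("run" if allowed else "shadow", agent_id))
        return allowed

registry = AgentRegistry()
registry.register("expense-summarizer", owner="finance-it")
registry.check("expense-summarizer")    # sanctioned, logged as "run"
registry.check("personal-gpt-wrapper")  # unsanctioned, logged as "shadow"
```

The design choice worth noting: `check` never silently blocks; it returns a decision and leaves an audit trail, which is what gives IT the visibility Goss argues a blanket ban destroys.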
The Skeptic’s Counterpoint
The 150,000 figure sits at the aggressive end of industry projections. Other research, including a widely cited MIT report, has pegged generative AI pilot failure rates as high as 95%. Companies like EY and Lumen have demonstrated successful deployments, but those remain concentrated in knowledge work and customer service, according to Computerworld.
Goss acknowledged that some agent deployments will fail, even with safeguards. “That is kind of okay, because actually we need to understand where these tools can help us and where they can’t,” he told Computerworld.
The Infrastructure Bet
Whether 150,000 agents per company becomes reality depends on three unsolved problems: multi-model redundancy to survive provider outages, governance frameworks that can track agent sprawl without choking adoption, and process redesign that treats agents as first-class participants rather than bolted-on automation. The companies solving those problems in 2026 will determine whether Gartner’s 2028 forecast looks prescient or aspirational.