Security firm Orchid Security published research showing that roughly half of enterprise identity activity now occurs outside the visibility of centralized identity and access management (IAM) systems. The finding quantifies a governance gap that security teams have suspected but struggled to measure: AI agents authenticate, acquire permissions, and operate inside applications in ways that traditional IAM infrastructure was never built to observe.

Orchid calls the phenomenon “identity dark matter.” The term describes identity activity that happens at the application layer, below the line of sight of identity providers and IAM connectors. According to Orchid’s analysis, the problem is structural: roughly half of identities and controls live in central directories and IAM tools, while an equal share lives inside the applications themselves. When AI agents authenticate locally rather than through a governed identity provider, their activity never appears in IAM logs.
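The distinction is easiest to see in code. The sketch below, with hypothetical endpoints and identifiers that are not drawn from Orchid’s research, contrasts an agent that obtains a token from the enterprise identity provider with one that calls the same application using a locally issued API key; only the first path leaves a trace the identity provider and IAM tooling can see.

```python
import requests  # endpoints and credentials below are illustrative only

# Governed path: the agent requests a token from the enterprise identity
# provider (OAuth2 client-credentials grant). The IdP logs the issuance,
# so the activity is attributable to a registered agent identity.
idp_resp = requests.post(
    "https://idp.example.com/oauth2/token",      # assumed IdP endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "agent-reporting-bot",      # registered agent identity
        "client_secret": "***",
        "scope": "crm.read",
    },
)
token = idp_resp.json()["access_token"]
requests.get(
    "https://crm.example.com/api/accounts",
    headers={"Authorization": f"Bearer {token}"},
)

# "Dark matter" path: the agent authenticates directly to the application
# with a key issued inside the app's own admin console. The identity
# provider never sees this exchange, so no IAM log entry is produced.
requests.get(
    "https://crm.example.com/api/accounts",
    headers={"X-Api-Key": "local-app-key-issued-in-crm-admin-panel"},
)
```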

Why AI Agents Compound the Problem

Traditional IAM was designed around human behavior patterns: login, session, logout. AI agents break that model. They run continuously, span multiple applications in a single workflow, acquire permissions dynamically as tasks require them, and generate activity at machine speed. None of these patterns map to the session-based telemetry that IAM platforms capture.
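The mismatch can be made concrete with a toy schema. The sketch below uses illustrative field names assumed for this example, not any vendor’s telemetry format: it contrasts the bounded, per-user session record that IAM platforms are built around with the open-ended action stream a continuously running agent produces.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# What session-based IAM telemetry typically records: a bounded interval
# tied to one user and one application.
@dataclass
class SessionEvent:
    user_id: str
    application: str
    login_at: datetime
    logout_at: Optional[datetime]  # a session is expected to end

# What a continuously running agent actually produces: a stream of
# fine-grained actions that cross applications and change scope mid-task,
# with no login/logout boundary to anchor them to.
@dataclass
class AgentAction:
    agent_id: str
    on_behalf_of: Optional[str]    # originating human, if known at all
    application: str
    action: str                    # e.g. "read_record", "grant_scope"
    scopes_in_effect: list[str]    # can grow dynamically as the task runs
    occurred_at: datetime

# Collapsing an agent's action stream into SessionEvent rows discards the
# cross-application and dynamic-permission detail that governance needs.
```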

The concern is not theoretical. Ramin Farassat, chief product and strategy officer at Menlo Security, told SiliconANGLE that enterprises need to “desperately re-architect their environment because there’s going to be this big army of AI agents.” Farassat pointed to a specific vulnerability: agents lack the intuition to recognize social engineering or prompt injection attacks, meaning they execute malicious instructions without the gut-check that a human operator would apply.

Gartner projects that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, up from less than 5% in 2025, a projection cited in Strata’s research. That growth rate explains why identity teams are falling behind: the number of autonomous actors inside the enterprise is scaling faster than the governance infrastructure designed to track them.

Analyst Validation

Two major analyst firms have responded to the gap. Gartner published its inaugural Market Guide for Guardian Agents in May 2026, codifying AI agent identity governance as a distinct enterprise security category. Separately, Forrester released the AEGIS framework, which identifies five categories of agentic AI risk: privilege, design and configuration, behavior, structural, and accountability. Forrester’s framework argues explicitly that legacy security models, which validate only whether an action is allowed, are insufficient; in agentic environments, validating intent becomes equally essential.
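The distinction between validating permission and validating intent can be sketched directly. The toy authorization check below uses hypothetical names and is not taken from AEGIS or any Forrester artifact; it passes an action only when it is both on an agent’s allow-list and consistent with the task the agent was delegated.

```python
# Illustrative two-stage check: permission alone vs. permission plus intent.
ALLOWED_ACTIONS = {"invoice-agent": {"read_invoice", "send_email"}}

def is_allowed(agent_id: str, action: str) -> bool:
    """Legacy-style check: is the action on the agent's allow-list?"""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

def matches_intent(action: str, task_context: dict) -> bool:
    """Intent check, approximated here by comparing the action against the
    actions the delegated task is expected to require."""
    return action in task_context.get("expected_actions", set())

def authorize(agent_id: str, action: str, task_context: dict) -> bool:
    # Both conditions must hold: the action is permitted AND it is
    # consistent with what the originating task should plausibly require.
    return is_allowed(agent_id, action) and matches_intent(action, task_context)

# A prompt-injected "send_email" passes the legacy allow-list check but
# fails the intent check when the delegated task only reads invoices.
print(authorize("invoice-agent", "send_email",
                {"expected_actions": {"read_invoice"}}))  # False
```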

That both firms published dedicated agent governance frameworks within the same month signals that the category has moved from theoretical concern to active market.

The Operational Gap

The practical challenge is granular. Strata’s research identifies four specific failure modes in current IAM architectures when applied to agents: privilege drift (agents accumulate permissions faster than human roles do), shadow agents (teams deploy agents outside security governance), protocol bypass (agents route around governed access points), and broken delegation chains (downstream agents operate with permissions that can’t be traced back to the originating human user).

Each of these failure modes maps to identity activity that Orchid’s research categorizes as dark matter: real, consequential, and invisible to the tools enterprises rely on for compliance and threat detection.
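The last of those failure modes, broken delegation chains, is the most concrete to sketch. The example below is an illustrative data structure, not Strata’s or Orchid’s implementation: it records each delegation hop back to the originating human and computes effective scopes as the intersection along the chain, so privileges can only narrow as delegation deepens and a downstream agent’s permissions remain traceable.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationHop:
    actor_id: str             # human user or agent id making the delegation
    granted_scopes: set[str]  # scopes granted to the next actor in the chain

@dataclass
class DelegationChain:
    origin_user: str                  # the human the chain must trace back to
    hops: list[DelegationHop] = field(default_factory=list)

    def effective_scopes(self) -> set[str]:
        """Effective scopes are the intersection of scopes along the chain,
        so a downstream agent can never hold more privilege than any hop
        above it granted."""
        scopes = None
        for hop in self.hops:
            scopes = hop.granted_scopes if scopes is None else scopes & hop.granted_scopes
        return scopes or set()

chain = DelegationChain(
    origin_user="alice@example.com",
    hops=[
        DelegationHop("orchestrator-agent", {"crm.read", "mail.send"}),
        DelegationHop("summarizer-agent", {"crm.read"}),
    ],
)
print(chain.effective_scopes())  # {'crm.read'}
```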

For security teams planning agent deployments, the 50% figure from Orchid’s research provides a concrete benchmark. It means that, roughly, for every identity event IAM systems capture from an AI agent, another of comparable consequence goes unrecorded.