The Cloud Security Alliance published the first large-scale empirical study of AI agent security outcomes in production enterprise environments on April 16, 2026. The numbers confirm what the past month of CVE disclosures and incident reports have been pointing toward: AI agent scope violations are not edge cases. They are the norm.
The CSA study, titled “Enterprise AI Security Starts with AI Agents” and commissioned by governance platform Zenity, surveyed 445 IT and security professionals across organizations of various sizes. The headline findings: 53% of organizations have had AI agents exceed their intended permissions, 47% experienced a security incident involving an AI agent in the past year, and only 8% of respondents said AI agents never exceeded their intended scope.
The Scope Violation Data
The 53% figure is the study’s most consequential finding. More than half of enterprises that have deployed AI agents have watched those agents operate outside their intended boundaries. According to CSA’s press release, only 16% of respondents reported high confidence in their ability to detect AI agent-specific threats. 44% reported low or no confidence.
The detection gap compounds the scope violation rate. When AI agent security incidents occur, detection and response times extend to hours and days, per the CSA study. In production environments where AI agents execute transactions, access sensitive data, and trigger workflows at machine speed, a multi-hour detection lag creates a window for both malicious exploitation and unintentional damage that compounds with every passing minute.
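The asymmetry is easy to quantify. Below is a back-of-envelope sketch of the exposure argument; all rates and lag values are hypothetical illustrations, not figures from the CSA study.

```python
# Hypothetical sketch: how a multi-hour detection lag multiplies exposure
# when an agent acts at machine speed. Numbers are illustrative assumptions.

def exposure_window(actions_per_minute: int, detection_lag_hours: float) -> int:
    """Count of actions that execute before the incident is detected."""
    return int(actions_per_minute * detection_lag_hours * 60)

# A human misusing credentials might manage a few actions per minute;
# an over-scoped agent can issue API calls continuously.
human_actions = exposure_window(actions_per_minute=2, detection_lag_hours=4)
agent_actions = exposure_window(actions_per_minute=120, detection_lag_hours=4)

print(human_actions)  # 480
print(agent_actions)  # 28800
```

The same four-hour lag that bounds a human incident at hundreds of actions lets an autonomous agent execute tens of thousands, which is why detection latency matters more for agents than for any prior class of insider risk.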
Shadow AI Agents Are Already Common
According to the CSA study, 54% of organizations report between 1 and 100 unsanctioned AI agents operating without defined ownership. Only 15% of respondents said that 76 to 100% of their agents have defined ownership, and 34% reported ownership visibility for just 26 to 50% of their AI agents.

The shadow agent problem mirrors the shadow IT pattern that dominated cloud security discussions a decade ago, but with a critical difference: shadow AI agents can take autonomous actions. A shadow SaaS tool stores data in an unapproved location. A shadow AI agent reads emails, accesses financial data, and changes configurations inside core business workflows, as Zenity CEO Ben Kliger told the CSA.
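The ownership and scope findings point at the same control: deny-by-default authorization keyed to a registered owner and an explicit action allowlist. The sketch below is illustrative only; the names and policy shape are assumptions, not drawn from the CSA study or any specific governance product.

```python
# Minimal deny-by-default scope check for AI agents (illustrative sketch).
# An unregistered ("shadow") agent has no owner and no policy, so every
# action it attempts is refused; a registered agent is confined to its list.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    owner: str                                   # defined ownership
    allowed_actions: set[str] = field(default_factory=set)

def authorize(policies: dict[str, AgentPolicy], agent_id: str, action: str) -> bool:
    """Deny by default: shadow agents and out-of-scope actions both fail."""
    policy = policies.get(agent_id)
    if policy is None:
        return False  # shadow agent: no registered owner or policy
    return action in policy.allowed_actions

policies = {
    "billing-agent": AgentPolicy(
        owner="finance-team",
        allowed_actions={"read_invoice", "send_reminder"},
    ),
}

print(authorize(policies, "billing-agent", "read_invoice"))     # True
print(authorize(policies, "billing-agent", "change_config"))    # False: out of scope
print(authorize(policies, "unregistered-agent", "read_email"))  # False: shadow agent
```

The design choice worth noting is that ownership and scope are checked together: an agent nobody claims is treated exactly like an agent asking for something it was never granted.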
Adoption Without Governance
AI agent usage is already widespread across enterprise functions. 43% of organizations report that more than half of employees use AI agents regularly, according to the study. Adoption spans IT (53%), security (37%), customer service (34%), and engineering (34%).
Governance has not kept pace. While 50% of respondents report at least partially documented governance policies for AI agent usage, only 31% have formally adopted a policy. The compliance frameworks respondents rely on most for AI agent governance are HIPAA (43%), NIST AI Risk Management Framework (37%), and SOC 2 or ISO 27001 (34%). Only 13% feel highly prepared for upcoming AI-related regulations. 49% feel slightly or not at all prepared.
“AI agents are already operating at scale as part of the enterprise digital workforce, but security and governance haven’t kept pace with their autonomous actions,” Hillary Baron, AVP of Research at the Cloud Security Alliance, said in the press release.
The Week’s Security Context
The CSA data arrives in a week already crowded with AI agent security news: MCPwn (CVE-2026-33032) actively exploited in the wild against nginx-ui MCP endpoints; simultaneous RCE vulnerabilities in LangChain-ChatChat and Agent Zero via MCP server configuration injection; ShareLeak/PipeLeak prompt injection bypasses in Microsoft Copilot Studio and Salesforce Agentforce; OWASP's Q1 2026 exploit taxonomy documenting eight major AI agent exploit incidents; and Capsule Security launching from stealth with $7M specifically for AI agent runtime trust.
The CSA study supplies the production outcome data that connects these individual technical disclosures. The exploits researchers demonstrate in controlled environments correspond to real incidents in enterprise production deployments, and the 53% scope violation rate is the empirical baseline for the current generation of enterprise AI agent security.
Methodology and Limitations
The survey was conducted online by CSA in September and November 2025 and received 445 responses from IT and security professionals. Zenity commissioned the study and co-developed the questionnaire, though CSA's research analysts performed the data analysis independently. The commissioning relationship is transparent: Zenity sells AI agent governance software and has a commercial interest in the findings. CSA's institutional credibility as a leading cloud and AI security standards nonprofit lends independent weight, but the survey's self-reported nature and Zenity's involvement should be noted.
The complement of that 8% figure, the 92% of respondents who could not say their agents never exceeded permissions, is the number builders should internalize. Scope containment is the primary failure mode for the current generation of production AI agent deployments.