Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners found that 88% of enterprises experienced AI agent security incidents in the last twelve months. Only 21% have runtime visibility into what their agents are doing. Yet 82% of executives believe their policies protect them from unauthorized agent actions.

The data, published as part of a VentureBeat analysis alongside a separate three-wave survey of 108 qualified enterprises, quantifies a structural gap: enterprises are deploying agents faster than their security architectures can govern them.

Three Stages, One Gap

VentureBeat’s survey maps enterprise AI agent security into three maturity stages. Stage one is observation: logging agent activity and feeding it to SIEMs. Stage two is enforcement: integrating IAM controls that turn observation into action. Stage three is isolation: sandboxed execution that limits blast radius when guardrails fail.

Most enterprises are stuck at stage one. VentureBeat’s data shows monitoring investment snapped back to 45% of security budgets in March after dropping to 24% in February, when early movers shifted dollars into runtime enforcement and sandboxing. The reversion suggests that even enterprises attempting to advance to stage two are retreating to familiar territory.

Auditability priority tells a parallel story. In January, 50% of respondents ranked it a top concern. By February, that dropped to 28% as teams sprinted to deploy agents. In March, it surged to 65% when those same teams realized they had no forensic trail for what their agents did.

The Scale of the Problem

The numbers compound across surveys. Arkose Labs’ 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of security budgets address the risk. CrowdStrike’s Falcon sensors detect more than 1,800 distinct AI applications across enterprise endpoints.

Gravitee’s survey found 45.6% of teams still use shared API keys for agent authentication, and 25.5% of deployed agents can create and task other agents, according to VentureBeat. A quarter of enterprises have agents that can spawn additional agents their security teams never provisioned.

In healthcare, the gap is worse: 92.7% of organizations reported AI agent security incidents versus the 88% all-industry average, per Gravitee’s detailed findings.

Why Guardrails Alone Fail

CrowdStrike CTO Elia Zaitsev framed the visibility problem at RSAC 2026: “It looks indistinguishable if an agent runs your web browser versus if you run your browser,” he told VentureBeat. Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction.

A 2025 paper by researchers at Stanford, ServiceNow Research, University of Toronto, and FAR AI showed a fine-tuning attack that bypasses model-level guardrails in 72% of attempts against Claude 3 Haiku and 57% against GPT-4o, as cited by VentureBeat. The attack received a $2,000 bug bounty from OpenAI and was acknowledged as a vulnerability by Anthropic.

Prevention of unauthorized actions ranked as the top capability priority in every wave of VentureBeat’s survey at 68% to 72%, the most stable signal in the dataset. The demand is for permissioning at the infrastructure level, not prompt-level guardrails.

The Regulatory Clock

HIPAA’s 2026 Tier 4 willful-neglect maximum is $2.19 million per violation category per year. FINRA’s 2026 Oversight Report recommends an explicit human checkpoint before any agent that can act or transact executes, along with narrow scope, granular permissions, and complete audit trails.

For security teams evaluating their agent exposure, VentureBeat’s three-stage model offers a concrete diagnostic: inject a canary token into a test document, route it through your agent, and check whether it leaves your network. If it does, stage one has already failed.
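That canary test can be as simple as a unique string planted in a test document plus a search over exported egress logs. A minimal sketch, assuming you can dump proxy or DNS egress logs to a text file; the file paths in the usage comment are placeholders:

```python
import pathlib
import secrets

def plant_canary(doc_path: str) -> str:
    """Append a unique, otherwise-meaningless token to a test document."""
    token = "canary-" + secrets.token_hex(8)
    with open(doc_path, "a") as f:
        f.write(f"\nInternal reference: {token}\n")
    return token

def canary_leaked(egress_log_path: str, token: str) -> bool:
    """True if the token appears anywhere in the exported egress logs."""
    return token in pathlib.Path(egress_log_path).read_text(errors="ignore")

# Usage (paths are placeholders): plant the canary, route the document
# through the agent under test, then check the egress export.
# token = plant_canary("test_invoice.txt")
# if canary_leaked("/var/log/egress/export.log", token):
#     print("stage one failed: canary left the network")
```

Because the token is random and appears nowhere else, any hit in egress logs is unambiguous: the agent moved data off-network, and if no alert fired, observation has failed.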