Cisco’s State of AI Security 2026 report, analyzed by Spiceworks on March 16, puts hard numbers on a problem that’s been building for months: enterprises are deploying AI agents far faster than they can secure them.

The headline figures: 83% of businesses surveyed plan to deploy agentic AI capabilities. Only 29% feel prepared to secure those deployments. That 54-point gap represents thousands of organizations running autonomous AI agents in production environments without adequate security controls, monitoring, or governance frameworks.

What “Not Ready” Looks Like

The 29% readiness figure covers multiple dimensions of AI agent security: identity and access management for agent credentials, monitoring of agent actions across systems, data loss prevention for agent-accessible information, and incident response procedures when agents behave unexpectedly.

Most enterprises deploying AI agents are inheriting the same security model they use for SaaS applications — API key management, role-based access control, network segmentation. Agentic AI breaks these assumptions. An AI agent with shell access and API credentials doesn’t fit neatly into traditional IAM categories. It’s not a user, not a service account, and not an application. It’s an autonomous actor that makes decisions, and existing security tooling wasn’t built to monitor autonomous decision-making.

The Attack Surface Expansion

Spiceworks titled its analysis “When AI Agents Become Your Newest Attack Surface,” and the framing is precise. Every AI agent deployment creates new attack vectors that don’t exist in traditional software:

Prompt injection: Agents that process external input (emails, documents, web content) can be manipulated through crafted prompts embedded in that content. An agent summarizing customer emails can be redirected to exfiltrate data if a malicious email contains injection instructions.

Credential exposure: AI agents need API keys, OAuth tokens, and service credentials to interact with enterprise systems. Each credential stored in an agent’s configuration is a target — as CVE-2026-21852 demonstrated with Claude Code’s API key exfiltration vulnerability.

Lateral movement: Agents with broad system access can be compromised once and used to pivot across the enterprise. Traditional lateral movement requires an attacker to navigate network segmentation; an agent with cross-system API access bypasses that segmentation by design.

Shadow deployments: Individual employees deploying AI agents without IT approval — the “shadow AI” problem — means security teams may not even know what agents are running, with what credentials, accessing what data.

The Governance Gap

Cisco’s numbers arrive alongside concrete evidence of the risks. In the same week: a formal CVE on Claude Code for credential exfiltration, a Just Security analysis revealing that Chinese state actors used jailbroken Claude Code for a 30-target cyberattack campaign, and OpenAI and Anthropic both launching AI security agents to address vulnerability scanning.

The gap between adoption (83%) and readiness (29%) suggests a correction is coming — either through improved security tooling or through high-profile breaches that force a slowdown. NVIDIA’s NemoClaw platform, launched at GTC 2026, is explicitly targeting this gap with enterprise-grade security orchestration for OpenClaw-native agents. Galileo’s Agent Control, open-sourced on March 11, takes a different approach with a centralized control plane for runtime guardrails.

The tools are arriving. Whether they arrive before the breaches do is the open question Cisco’s report leaves unanswered.