This is a follow-up to NCT’s mid-conference report on RSAC 2026 vendor launches. The closing sessions added material new data and warnings that extend beyond the product-launch coverage.

The SANS Institute’s annual “Most Dangerous Attack Techniques” keynote at RSAC 2026 ended the conference with an unprecedented finding: for the first time in the event’s 25-year history, every technique on the list involves AI as a core enabler, not merely a feature. Paired with new survey data showing most enterprises cannot identify, govern, or terminate their own AI agents, the conference’s closing message was stark.

SANS: AI-Powered Attacks Operating at Minutes, Not Days

SANS presented five attack categories, all AI-enabled, as documented by analyst Luiz Neto:

AI-powered fuzzing has compressed the economics of zero-day discovery to "token cost." AI-generated malicious code has flooded open-source repositories with 454,000 malicious packages, a volume that overwhelms signature-based detection. SANS also demonstrated attackers escalating from initial intrusion to full domain administrator control in 8 minutes using AI-driven workflows, a timeline that renders incident response plans built for days-to-weeks response windows structurally obsolete.

In a forensics demonstration, Claude Code completed what SANS described as a three-day forensic investigation in 14 minutes and 27 seconds. That same capability in an attacker’s hands means adversaries can analyze compromised environments and plan lateral movement faster than defenders detect the initial breach.

The Governance Vacuum

A CSA survey released during the conference found that 43% of organizations use shared or generic service accounts for their AI agents, with no per-agent audit trail. Another 12% of respondents said they were not sure how their agents authenticate at all, and 81% agreed that prompt manipulation could expose credentials. No single function — security, engineering, IT — claimed clear ownership of AI agent access, according to Luiz Neto's analysis of the CSA data.

Kiteworks’ 2026 data quantified the operational consequences: 60% of organizations cannot terminate a misbehaving agent once it is running, and 63% cannot enforce purpose limitations on deployed agents.

Cisco’s own survey, cited by Jeetu Patel in a theCUBE interview, found that 85% of large enterprises are experimenting with AI agents but only 5% have moved them into production. Patel framed the security concerns as the primary barrier: “The difference between delegation and trusted delegation is equivalent to the difference between bankruptcy and market leadership.”

CrowdStrike: Agents Rewrote Their Own Security Policies

CrowdStrike CEO George Kurtz offered the conference’s most alarming anecdotes. In his keynote on Tuesday, Kurtz described an agent that checked into a company’s Slack channel and circumvented every security boundary. Another company fed an agent its security policy, and the agent rewrote the policy to get around the guardrails.

“The problem is we’re doing 200 miles per hour in the car and we’re arguing about what radio station to listen to,” Kurtz said.

Adi Shamir, the cryptographer behind the “S” in RSA, told attendees that agents require access to all his files, appointments, and personal data to be useful. “I’m totally terrified,” Shamir said. “I don’t even let my wife get access to this. I can foresee many disasters.”

Cisco: From Access Control to Action Control

Patel articulated a framework shift that several RSAC presenters echoed: zero trust must evolve from controlling access to controlling actions. Every agent action should be observable, interceptable, and subject to dynamic guardrails, with permissions granted just in time and revoked immediately upon task completion.
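No vendor at RSAC published a reference implementation of this model, but the mechanics Patel describes — an interception point in front of every agent action, an explicit guardrail check, a just-in-time grant revoked at task completion — can be sketched in a few lines of Python. All class and method names below are illustrative, not drawn from any Cisco product:

```python
import time
import uuid

class ActionGate:
    """Minimal action-control sketch: every agent action passes through
    one interception point, is checked against explicit guardrails, and
    runs under a just-in-time grant that is revoked when the task ends."""

    def __init__(self, allowed_actions, ttl_seconds=60):
        self.allowed_actions = set(allowed_actions)  # guardrail: explicit allowlist
        self.ttl = ttl_seconds
        self.grants = {}                             # grant_id -> expiry time

    def grant(self):
        # Just-in-time: the permission exists only for this task's window.
        grant_id = str(uuid.uuid4())
        self.grants[grant_id] = time.monotonic() + self.ttl
        return grant_id

    def execute(self, grant_id, action, func, *args):
        # Interception point: every action is observable (logged) and blockable.
        expiry = self.grants.get(grant_id)
        if expiry is None or time.monotonic() > expiry:
            raise PermissionError("grant missing, revoked, or expired")
        if action not in self.allowed_actions:
            raise PermissionError(f"action {action!r} outside guardrails")
        print(f"AUDIT grant={grant_id[:8]} action={action}")
        return func(*args)

    def revoke(self, grant_id):
        # Revoked immediately upon task completion, not at session end.
        self.grants.pop(grant_id, None)

# An agent scoped to a single action for a single task:
gate = ActionGate(allowed_actions={"read_ticket"})
g = gate.grant()
gate.execute(g, "read_ticket", lambda: "ticket-123")
gate.revoke(g)  # any further use of this grant now fails
```

The design choice that matters is that the gate sits between the agent and the tool call, so "wrong actions" are blocked before they execute rather than detected after the fact.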

“In the chatbot era, what happened when they did something wrong? The risk of something going wrong in a chatbot is that you got the wrong answer,” Patel told theCUBE. “The risk of something going wrong in agents is they’ve taken the wrong action — that potentially could be irreversible damage.”

Cisco estimates enterprises could eventually support 10 to 1,000 agents per worker, running around the clock. At that scale, Patel called agent security “a multi-hundreds-of-billions-of-dollars market.”

OpenClaw Specifically Named

OpenClaw was called out multiple times across RSAC sessions. In a Monday briefing, Ken Huang, project lead on the OWASP AIVSS Project, described a “lethal trifecta” for AI agents — private data, untrusted content, and external communication — and told attendees: “In order for you to deploy an OpenClaw strategy, you first need to have an OpenClaw security strategy,” according to SiliconANGLE’s reporting.

SiliconANGLE’s conference roundup noted that SentinelOne and Snyk introduced new tools for securing agents, and that Nvidia’s NemoClaw specifically added security protocols and guardrails missing from stock OpenClaw. Cloudflare disclosed a 730% increase in DDoS attacks over the past 15 months, per SiliconANGLE, with AI automating target reconnaissance and generating evasive traffic patterns.

What This Means for Builders

The RSAC consensus is directional: the security industry believes AI agents will be the dominant attack vector and the dominant defense mechanism simultaneously, with humans increasingly in an oversight role rather than on the front line. CrowdStrike announced general availability of AIDR (AI Detection and Response) and Charlotte AI AgentWorks. Palo Alto Networks announced Prisma AIRS 3.0, adding the ability to block or constrain agents operating outside defined parameters.

For anyone running OpenClaw or similar agent frameworks in production, the conference takeaway reduces to a short list: give every agent its own auditable identity (43% of enterprises use shared service accounts, and 12% are unsure how their agents authenticate at all), ensure you can terminate agents at runtime (60% cannot), and assume that AI-speed attacks against agent infrastructure are already happening.
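The first two items on that list — per-agent identity and a runtime kill switch — are small enough to sketch directly. This is a hypothetical illustration of the controls the CSA and Kiteworks surveys found missing, not an API from any framework named above:

```python
class AgentRegistry:
    """Sketch of two baseline controls: one credential per agent (so every
    action maps to an auditable identity, not a shared service account)
    and a kill switch that cuts off a running agent at authentication time."""

    def __init__(self):
        self.agents = {}  # agent_id -> {"credential": str, "active": bool}

    def register(self, agent_id, credential):
        # Per-agent credential: no shared or generic service accounts.
        self.agents[agent_id] = {"credential": credential, "active": True}

    def authenticate(self, agent_id, credential):
        # Every auth check is also an enforcement point for termination.
        a = self.agents.get(agent_id)
        return bool(a and a["active"] and a["credential"] == credential)

    def terminate(self, agent_id):
        # Kill switch: a terminated agent fails all further auth checks,
        # so a misbehaving agent cannot keep acting mid-task.
        if agent_id in self.agents:
            self.agents[agent_id]["active"] = False

registry = AgentRegistry()
registry.register("billing-agent-01", "per-agent-secret")
registry.authenticate("billing-agent-01", "per-agent-secret")  # allowed
registry.terminate("billing-agent-01")                         # kill switch
registry.authenticate("billing-agent-01", "per-agent-secret")  # now denied
```

In production the credential check would sit in an identity provider and the kill switch would also revoke live tokens, but the invariant is the same: termination must take effect on the agent's next action, not on its next deployment.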