Nearly half of enterprises cannot see what their AI agents are doing. That is the headline finding from Salt Security’s 1H 2026 State of AI and API Security Report, released April 8-9, which surveyed more than 300 enterprise security leaders on how their organizations are handling the shift from human to agent-driven API consumption.

The Numbers

The report quantifies a visibility crisis that security teams have been describing anecdotally for months:

  • 48.9% of organizations are “entirely blind to machine-to-machine traffic,” meaning they cannot monitor what their autonomous agents do when calling internal or external APIs.
  • 48.3% cannot effectively distinguish legitimate AI agents from malicious bots.
  • 78.6% of security leaders report increased board-level and executive scrutiny of AI security risks.
  • 68.8% of boards are specifically concerned about sensitive data leakage through AI prompts or models.
  • 38.8% are worried about autonomous agents acting without human oversight.
  • Only 23.5% of respondents consider their existing security tools “very effective” at preventing agent-class attacks.

According to Security Boulevard’s coverage, the report also found that 47% of organizations have delayed a production release due to concerns about securing APIs exposed to autonomous systems, and nearly 47% of respondents reported API growth of 51-100% in the past year.

APIs as the “Agentic Action Layer”

The report introduces a specific framing: APIs are now the “Agentic Action Layer,” the operational backbone through which autonomous agents execute real-world actions. AI agents don’t browse the web the way humans do. They call APIs. They use LLMs for reasoning, MCP servers for connectivity, and internal APIs for execution.
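The pattern the report describes can be sketched in a few lines. This is an illustrative stub, not code from the report or any real SDK: the function names, endpoints, and payloads below are all invented to show the shape of an agent loop in which an LLM plans and APIs execute.

```python
# Minimal sketch of the "Agentic Action Layer" pattern: an agent reasons
# with an LLM, then acts by calling APIs rather than browsing pages.
# Every name here is a hypothetical stand-in, not a real service.

def fake_llm_plan(goal: str) -> list[dict]:
    """Stand-in for the LLM reasoning step: turn a goal into API calls."""
    return [
        {"endpoint": "/crm/contacts", "method": "GET", "params": {"q": goal}},
        {"endpoint": "/mail/send", "method": "POST", "params": {"to": "ops"}},
    ]

def call_internal_api(endpoint: str, method: str, params: dict) -> dict:
    """Stand-in for the execution layer: a real agent would hit internal
    or MCP-exposed APIs here -- exactly the machine-to-machine traffic
    the report says many security teams cannot see."""
    return {"endpoint": endpoint, "method": method, "status": 200}

def run_agent(goal: str) -> list[dict]:
    # The agent loop: plan with the LLM, execute each step via an API call.
    return [call_internal_api(**step) for step in fake_llm_plan(goal)]

print(run_agent("find overdue accounts"))
```

The point of the sketch is the middle layer: every meaningful action the agent takes surfaces as an API call, which is why the report treats APIs as the place where agent behavior must be observed.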

The problem, according to Salt Security, is that legacy Web Application Firewalls were built to monitor human traffic patterns: predictable sessions, static signatures, rate limits. Agent-driven API behavior is high-frequency, tool-calling, and LLM-driven, with agents dynamically hitting undocumented endpoints or leveraging MCP servers outside the security team’s visibility. The result is what the report calls “Shadow AI” blind spots.
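The mismatch between human-tuned controls and agent traffic is easy to demonstrate. The sketch below (my illustration, not from the report) shows a fixed-window rate limiter with a human-scale threshold: a legitimate agent fanning one task out into dozens of tool calls gets blocked, while any client pacing itself under the limit, malicious or not, sails through.

```python
# Illustrative sketch: why a rate limit tuned for human browsing misreads
# agent traffic. The threshold and traffic volumes are invented numbers.

from collections import defaultdict

WINDOW_LIMIT = 10  # requests per client per window -- a human-scale setting

def allowed(counts: dict, client: str) -> bool:
    """Fixed-window check: count the request, allow if under the limit."""
    counts[client] += 1
    return counts[client] <= WINDOW_LIMIT

counts = defaultdict(int)

# A human session: a handful of page-backed API calls.
human_verdicts = [allowed(counts, "human") for _ in range(5)]

# One legitimate agent task: dozens of tool calls in the same window.
agent_verdicts = [allowed(counts, "agent") for _ in range(40)]

print(all(human_verdicts))          # True -- the human passes cleanly
print(agent_verdicts.count(False))  # 30 -- most agent calls are blocked
```

The limiter has no notion of intent, only volume, which is the gap the report's proposed behavioral-analysis category is meant to close.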

The Confidence Gap

The disconnect between board-level concern and tooling effectiveness is stark. Nearly 80% of boards are asking questions about AI security. Fewer than one in four security teams believe they have the tools to answer those questions. The report argues this gap requires two new product categories: Agentic Security Posture Management (continuous discovery and governance of agent infrastructure from code to runtime) and Agentic Detection and Response (behavioral analysis that moves beyond static signatures to identify malicious intent in non-deterministic agent behavior).

Timing and Context

The report lands in a week already dense with AI agent security developments: Palo Alto Networks completed its acquisition of Koi to define Agentic Endpoint Security as a product category, Norton launched consumer-grade AI Agent Protection in beta, and Ledger published a hardware root of trust roadmap for agent security. Salt Security’s data provides the enterprise survey evidence that explains why all of these companies are investing simultaneously. When half of enterprises cannot see what their agents are doing, the security market responds.