The NSA, CISA, and cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom published a joint advisory on April 30 titled “Careful Adoption of Agentic AI Services,” establishing the first coordinated international security framework specifically targeting autonomous AI agent deployments.

The guidance, co-authored by the Australian Signals Directorate’s ACSC, Canada’s CCCS, New Zealand’s GCSB, and the UK’s NCSC, addresses organizations deploying agentic AI across government, defense, and critical infrastructure environments, according to ExecutiveGov.

Five Categories of Agent Risk

The advisory identifies five distinct risk categories for agentic AI systems: privilege risks (agents accessing sensitive data and systems beyond their intended scope), design and configuration risks, behavior risks (agents taking unintended autonomous actions), structural risks, and accountability risks. The agencies note that agentic systems also inherit all risks associated with the underlying large language models they run on, according to ExecutiveGov.

The distinction between agentic systems and conversational models is central to the advisory. A standard LLM processes and generates text. An agentic system has access to tools, APIs, databases, and email clients, and is trusted to decide when and how to use them. That tool use turns a language model into an actor with real-world consequences, Windows News reported.

Prompt Injection as Top Threat

The advisory identifies prompt injection as the primary threat vector. A senior CISA official said during a press briefing that “agentic AI agents are designed to reduce human workload, but if left ungoverned, they become a direct pipeline from a prompt injection to a data breach or a system compromise,” according to Windows News. The advisory notes that proof-of-concept exploits have been observed in the wild and that the barrier to entry is low.

Because agents operate with the user’s privileges, any successful prompt injection can bypass standard access controls. An email containing hidden instructions could direct an agent to forward sensitive documents, delete calendar entries, or exfiltrate data.

Three Pillars: Governance, Visibility, Restraint

The framework prescribes best practices across the full AI lifecycle: designing secure agents, developing them with security built in, managing third-party components, deploying incrementally, and operating with continuous monitoring. The three core pillars are exhaustive governance policies, continuous visibility into agent actions, and strict least-privilege enforcement, per Windows News.

Organizations are directed to deploy agentic AI incrementally and continuously assess systems against evolving threat models. The document emphasizes strong governance, rigorous monitoring, explicit accountability, and human oversight at critical decision points, according to ExecutiveGov.

Building on Prior Federal Efforts

The guidance follows earlier international AI security efforts. In 2025, CISA and allied partners issued guidance for critical infrastructure operators deploying AI in operational technology systems. NSA and international partners also previously outlined best practices for securing data across the AI lifecycle, according to ExecutiveGov. The Australian Cyber Security Centre also published the guidance on its own portal.

The Policy Signal

The coordinated Five Eyes release marks the point where agentic AI security shifts from industry best practice to government policy priority. For any organization deploying autonomous agents in regulated or sensitive environments, the guidance establishes a compliance baseline that procurement and audit teams will likely reference.