The Trump administration published its National Policy Framework for Artificial Intelligence in March 2026. The framework’s central goal is to prevent a patchwork of state-level AI regulations that could impede US competitiveness, while maintaining what it calls “fundamental national protections.” In a Forbes analysis published May 8, cybersecurity analyst Chuck Brooks argues the framework correctly identifies the stakes but leaves the hardest problem unsolved: how to govern systems that operate faster than any human can review.
The Governance Latency Problem
Brooks frames the core tension clearly: traditional governance frameworks are “temporal and anthropocentric,” designed for humans making decisions at human speed. Agentic AI is “perpetual and machine-centric.” An agent processing insurance claims, executing trades, or managing infrastructure doesn’t pause for quarterly compliance reviews.
This mismatch creates what Brooks calls a governance latency gap. In critical systems, that gap has operational, legal, and geopolitical consequences. A compromised agent can propagate disturbances across interconnected systems faster than any incident response team can react.
The NIST AI Risk Management Framework, which the federal policy references, was built for a pre-agent world. Brooks argues it needs expansion to cover autonomous decision-making and agent-to-agent interactions.
Federal Preemption vs. Regulatory Vacuum
The framework’s clearest strength is its anti-fragmentation stance. A patchwork where California, Texas, and New York each regulate AI agents differently would create compliance nightmares for builders deploying nationally. But the Trump administration’s approach risks inverting the EU’s mistake: where the EU risks over-regulation through the AI Act, the US risks under-regulation by preempting state rules without replacing them with federal ones.
The practical gap is enforcement. The framework calls for “fundamental national protections” but does not define what those protections look like for autonomous systems. Who is liable when an agent makes a bad decision autonomously? The company that deployed it? The platform it runs on? The model provider? The framework does not say.
Security at Machine Speed
Brooks, writing in Forbes, identifies AI-on-AI conflict as the most concerning scenario: defensive and offensive agents creating feedback loops that exceed human oversight capacity. This is not theoretical. Wiz’s Red Agent already scans 150,000+ production applications weekly, autonomously discovering and exploiting vulnerabilities. Defensive agents are responding in kind.
The framework acknowledges this convergence, placing agentic AI alongside quantum computing, 5G, and edge computing as technologies creating a “hyper-connected environment with both opportunity and fragility.” But acknowledgment is not regulation.
The Shift Builders Should Watch
Brooks argues organizations need to move from “human-in-the-loop” to “human-on-the-loop” models, where humans monitor autonomous systems and intervene when necessary rather than approving every action. The framework implicitly supports this shift but provides no guidance on what “on the loop” means in practice: what logging is required, what decisions trigger mandatory human review, what constitutes adequate oversight.
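To make the distinction concrete, here is a minimal sketch of what a "human-on-the-loop" gate could look like in code. It is illustrative only: the risk scale, the 0.7 review threshold, and the class and function names are all hypothetical assumptions, not anything defined by the framework or by Brooks.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (routine) to 1.0 (high impact); hypothetical scale

@dataclass
class OversightPolicy:
    """Human-on-the-loop: the agent acts autonomously, every action is logged,
    and actions above a risk threshold are held for human review instead of
    executing."""
    review_threshold: float = 0.7                       # hypothetical cut-off
    audit_log: list = field(default_factory=list)       # what gets logged
    review_queue: list = field(default_factory=list)    # what a human must see

    def submit(self, action: AgentAction, execute: Callable[[], None]) -> str:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action.description,
            "risk": action.risk_score,
        }
        if action.risk_score >= self.review_threshold:
            # High risk: pause and escalate to a human rather than executing.
            entry["status"] = "held_for_review"
            self.review_queue.append(action)
        else:
            # Routine: execute autonomously; the human monitors the audit log.
            execute()
            entry["status"] = "executed"
        self.audit_log.append(entry)
        return entry["status"]

# Example: a claims-processing agent approves a small claim on its own,
# while a large payout is held for a human reviewer.
policy = OversightPolicy()
policy.submit(AgentAction("approve $450 claim", risk_score=0.2), execute=lambda: None)
policy.submit(AgentAction("approve $2.1M payout", risk_score=0.9), execute=lambda: None)
print([e["status"] for e in policy.audit_log])  # ['executed', 'held_for_review']
```

The open questions Brooks raises map directly onto the parameters in a sketch like this: what belongs in the audit log, where the review threshold sits, and who staffs the review queue are exactly what the framework leaves undefined.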
For builders deploying agents today, the message is contradictory. The federal government wants to avoid stifling innovation with regulation. It also wants fundamental protections. Until those protections are defined, companies shipping agents are operating in a regulatory vacuum that may fill suddenly and unpredictably once the first high-profile autonomous agent failure reaches a courtroom.