Microsoft’s Power Platform team published a governance framework on April 1 for managing enterprise AI agents, arguing that the core challenge is neither technology safety nor adoption speed but outdated governance models that force organizations into a binary choice: lock everything down or figure it out later.

The framework, developed through a conversation with Futurum analyst Fernando Montenegro, proposes three graduated risk zones for agent deployments:

  • Low risk: Self-serve scenarios with tight guardrails, limited data access, and limited sharing. Teams build without opening tickets; IT stays out of the critical path.
  • Medium risk: Broader sharing, more sensitive data, and more consequential actions. These trigger review and oversight without requiring heavyweight governance on every idea.
  • High risk: Business-critical workflows tied to core systems, requiring deliberate control, specific access boundaries, and defined oversight from day one.
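The three zones amount to a classification rule over a handful of agent attributes. The sketch below is illustrative only: the attribute names (`shared_beyond_team`, `touches_sensitive_data`, `business_critical`) are assumptions standing in for whatever signals Microsoft's platform actually evaluates, not its real implementation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskZone(Enum):
    LOW = "low"        # self-serve, tight guardrails, IT out of the path
    MEDIUM = "medium"  # triggers review and oversight
    HIGH = "high"      # deliberate control from day one


@dataclass
class AgentProfile:
    # Hypothetical signals; the article does not specify the real inputs.
    shared_beyond_team: bool
    touches_sensitive_data: bool
    business_critical: bool


def classify(agent: AgentProfile) -> RiskZone:
    """Map an agent's profile onto the framework's three graduated zones."""
    if agent.business_critical:
        return RiskZone.HIGH
    if agent.shared_beyond_team or agent.touches_sensitive_data:
        return RiskZone.MEDIUM
    return RiskZone.LOW
```

The graduated structure is the point: most agents fall into the low-risk branch and never enter a review queue, while the two escalating conditions route only the consequential ones to oversight.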

Platform Enforcement Over Policy Documents

The central argument is that governance only works when enforced by the platform itself, not through policy decks, emails, or spreadsheets. Microsoft’s “managed environments” capability in Power Platform applies inventory tracking, usage insights, connector governance, and lifecycle management across apps, automations, and agents built in Copilot Studio.

One specific control: sharing limits paired with promotion gates. An agent built for an individual or immediate team carries one risk profile. An agent being shared broadly triggers a different one, requiring deliberate promotion, review, and accountability before scaling.
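Platform enforcement means the gate lives in the sharing code path itself, not in a policy document. A minimal sketch of that idea, assuming a hypothetical `share_agent` API and an arbitrary team-size threshold (neither is from Microsoft's actual product):

```python
class PromotionError(Exception):
    """Raised when broad sharing is attempted without deliberate promotion."""


def share_agent(agent_id: str, audience_size: int, promoted: bool,
                team_limit: int = 10) -> str:
    """Enforce a sharing limit paired with a promotion gate.

    Sharing within an immediate team proceeds self-serve; sharing beyond
    the (hypothetical) team_limit is refused unless the agent has been
    through promotion, review, and an accountability sign-off.
    """
    if audience_size > team_limit and not promoted:
        raise PromotionError(
            f"Agent {agent_id}: sharing to {audience_size} users requires "
            "promotion review and an accountable owner."
        )
    return f"Agent {agent_id} shared with {audience_size} users."
```

Because the check runs on every share attempt, scaling an agent without review is not a policy violation someone might catch later; it is an operation the platform simply refuses to perform.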

The blog also makes a point about agent permissions that teams building autonomous systems should internalize: agents generally operate as the calling user. They don’t gain new permissions. When an agent exposes an access problem, that problem already existed in the organization’s identity and access management. Fixing agent governance without fixing the underlying permission sprawl addresses a symptom, not the cause.
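The "agent runs as the calling user" model can be sketched as a delegated-permission check: the agent holds no standing grants of its own, and every action is evaluated against the caller's existing permissions. The grant store and scope names below are invented for illustration.

```python
# Hypothetical per-user grants; in practice this is the organization's
# identity and access management system, not an in-memory dict.
USER_GRANTS = {
    "alice": {"read:crm", "write:crm"},
    "bob": {"read:crm"},
}


def agent_may_act(calling_user: str, required_scope: str) -> bool:
    """An agent action succeeds only if the *caller* already holds the scope.

    The agent contributes no permissions of its own, so anything it can
    reach, the calling user could already reach directly.
    """
    return required_scope in USER_GRANTS.get(calling_user, set())
```

The corollary matches the blog's point: if `agent_may_act("bob", "read:crm")` surfaces data Bob should never have seen, the defect is in Bob's grant, not in the agent, and the fix belongs in IAM rather than in agent-specific controls.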

Timing and Context

The post arrives as Microsoft’s 2026 release wave 1 rolls out from April through September, bringing admin controls for agent security, real-time risk assessment in Copilot Studio, and AI-powered governance agents that automate tenant monitoring. The governance framework blog provides the conceptual model that these product features are designed to enforce.

For teams evaluating enterprise agent platforms, the practical takeaway is that Microsoft is treating governance as an infrastructure feature baked into the platform rather than a compliance layer added after deployment. Whether that translates to fewer incidents will depend on how organizations implement the risk classification in practice.