India’s government announced the formation of the AI Governance and Economic Group (AIGEG) on April 17, the first dedicated inter-ministerial body for AI governance outside the EU and U.S. The move comes as Indian companies deploy autonomous AI agents in banking, payments, and supply chains without a regulatory framework designed for systems that initiate actions, not just assist decisions.
The Body
AIGEG will be chaired by Electronics and Information Technology Minister Ashwini Vaishnaw, with Minister of State Jitin Prasada as Vice Chairperson, according to the Economic Times. Members include the Principal Scientific Advisor, the Chief Economic Advisor, and the NITI Aayog CEO. Secretaries from MeitY, the Department of Telecommunications, the Department of Economic Affairs, the Department of Science and Technology, and a National Security Council Secretariat representative also sit on the body.
A Technology and Policy Expert Committee (TPEC) will provide advisory support on global developments, emerging technologies, risks, and regulation.
The Mandate
AIGEG has a broad mandate, per the Economic Times:
- Review existing mechanisms governing AI
- Study emerging risks and identify regulatory gaps
- Issue guidelines to hold firms accountable for compliance with local AI laws
- Develop a decade-long roadmap for AI deployment, including assessment of affected job profiles, geographic impact concentration, and the extent of automation versus augmentation
- Coordinate with India’s recently established AI Safety Institute on safe AI development
AIGEG builds on MeitY's AI governance guidelines, released in November 2025, which called for potentially amending the Information Technology Act to classify AI systems, creating an India-specific risk assessment framework, and establishing a national database for AI-related security incidents.
Why Agents Changed the Calculus
The Economic Times Morning Dispatch framed the urgency directly: “Companies are rolling out AI agents across high-stakes sectors such as payments, banking and supply chains. Unlike traditional AI tools, these systems do not just assist decisions but initiate actions. In agent-to-agent setups, a single move by one system can trigger a chain reaction across platforms in healthcare, logistics, or finance.”
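The chain-reaction concern can be made concrete with a small sketch. This is a hypothetical illustration, not drawn from any cited framework: each agent automatically triggers its downstream dependents, so a single upstream action propagates across every connected platform. The agent names and wiring are invented for the example.

```python
# Hypothetical sketch of an agent-to-agent cascade: one agent's action
# auto-triggers dependent agents on other platforms.

def cascade(triggers: dict[str, list[str]], start: str) -> list[str]:
    """Return the full chain of actions set off by one agent's action."""
    fired, stack = [], [start]
    while stack:
        agent = stack.pop()
        if agent in fired:
            continue  # each agent acts at most once
        fired.append(agent)
        stack.extend(triggers.get(agent, []))  # fan out to dependents
    return fired

# Illustrative wiring: a payments agent's single move ripples into
# banking, logistics, and a supplier's inventory agent.
triggers = {
    "payments_agent": ["banking_agent", "logistics_agent"],
    "logistics_agent": ["inventory_agent"],
}
chain = cascade(triggers, "payments_agent")
print(chain)  # one move by one agent fires four agents across three domains
```

The point of the sketch is that liability and audit questions attach to the whole chain, not just the agent that moved first.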
India’s financial infrastructure is a prime deployment target. UPI and IMPS handle billions of transactions monthly. The legal status of an AI agent initiating a financial transaction is undefined in Indian law. NITI Aayog has floated a risk-based sandbox approach for high-impact AI systems, but it has not been formalized.
The Global Regulatory Pattern
AIGEG is structurally similar to the EU's AI Office, which coordinates enforcement of the AI Act. Both treat AI agent governance as a cross-ministry coordination problem rather than a single-regulator domain. This week, the Bank of England separately confirmed it will conduct AI-specific stress tests focused on herding behavior, in which AI agents trained on similar data make correlated market decisions. The EU AI Act's Annex III logging requirements for AI agent deployments take effect August 2, 2026.
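A toy simulation shows why herding worries central bankers. This is an invented illustration, not the Bank of England's test design: agents trained on similar data end up with near-identical decision thresholds, so one adverse signal makes them all sell at once even though each decides independently.

```python
# Hypothetical herding sketch: agents with similar training data share
# nearly identical sell thresholds, so their decisions are correlated.
import random

random.seed(0)  # deterministic for the illustration

def decide(signal: float, threshold: float) -> int:
    """1 = sell, 0 = hold; each agent adds only small idiosyncratic noise."""
    return 1 if signal + random.gauss(0, 0.05) < threshold else 0

# Ten agents whose thresholds differ only slightly (similar training data).
thresholds = [random.gauss(0.0, 0.02) for _ in range(10)]

shock = -0.5  # a single adverse market signal
decisions = [decide(shock, t) for t in thresholds]
print(sum(decisions), "of", len(decisions), "agents sell simultaneously")
```

Independently built agents with genuinely different models would spread their responses out; correlated training collapses that diversity, which is the failure mode a stress test probes.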
India’s inter-ministerial body, the Bank of England’s stress tests, and the EU’s August deadline form a regulatory arc: governments are building governance infrastructure specifically for autonomous agents, not just AI models in general. The regulatory window for builders deploying agents in Indian financial services is still open, but AIGEG’s formation signals it is narrowing.