Former RBI governor Raghuram Rajan recently questioned whether the world could drift into an “AI nightmare” while urging governments to prepare for multiple transformative outcomes.

While warning against exaggerated doomsday narratives, he noted in a Bloomberg interview that adoption will be gradual, allowing time to reskill and adapt.

That gradual transition, however, should not encourage policy complacency.

The speed of AI development and its capacity to reshape markets, labour, and information flows make governance essential.

The challenge is to mitigate harm without undermining innovation, a balance that major economies are pursuing in different ways.

Different global approaches

The US has largely opted for a sector-specific and non-prescriptive approach. Rather than an overarching AI law, it relies on executive guidance, regulatory authorities, and voluntary standards.

Bodies such as the National Institute of Standards and Technology have issued risk-based frameworks to encourage trustworthy AI while preserving flexibility for innovation. This approach underscores competitiveness and rapid commercialisation.

The EU’s interventionist route prioritises rights protection and safety, even at the cost of regulatory complexity.

Its AI Act, adopted in 2024, requires impact assessments, documentation, and transparency from developers and deployers of high-risk AI systems. Non-compliance can attract penalties of up to 7% of global annual turnover.

China, by contrast, has a security-first and state-centric model that tightly supervises AI deployment while encouraging domestic innovation.

Its AI governance framework combines data localisation requirements, algorithm registration and transparency rules, and content moderation obligations.

India’s unique position

India’s regulatory approach has been starkly different. Rather than retrofitting safeguards after deployment, governance mechanisms have often been embedded into system design.

India’s digital public infrastructure (DPI) stack, offered at zero cost and widely adopted across sectors, is globally distinctive in both scale and interoperability: systems such as Aadhaar and UPI build in oversight, authentication, and accountability. This offers useful guideposts for AI governance.

A rigid, compliance-heavy framework like the EU’s, or a heavily centralised one like China’s, risks eroding India’s comparative advantages, particularly its entrepreneurial startup ecosystem and globally competitive IT services sector.

India needs a credible regulatory framework that is agile, adaptive, and grounded in real-world risk.

Policy options before India

Recent policy signals reflect this emerging consensus. The recently released India AI Governance Guidelines emphasise sector-specific governance, India-centric risk assessments, voluntary safeguards, and the prioritisation of existing legal frameworks over rushed new legislation.

Policy experts warn against economy-wide AI laws, advocating instead responsible-AI codes that embed safety, accountability, and explainability into design, protecting smaller innovators from compliance burdens while preserving room for regulatory intervention where risks emerge.

A case in point is the RBI’s FREE-AI Committee report, which articulates seven principles, including trust, people-first design, accountability, and safety.

Treating innovation and risk mitigation as complementary, it recommended the creation of shared infrastructure and AI regulatory sandboxes to democratise AI access.

Choosing the future

To avoid an AI nightmare while capturing AI’s promise for growth and social impact, India must build an AI governance framework that encourages experimentation and adoption, manages risk, and deepens inclusion while avoiding over-regulation.