A parliamentary panel has recommended that the government explore a comprehensive law to regulate AI, even as the government maintained that existing laws are sufficient to deal with emerging risks.

The recommendations are part of a report finalised by the standing committee on communications and information technology that is expected to be tabled in Parliament. The panel met Ministry of Electronics and Information Technology (MeitY) officials on Friday to consider and adopt the report.

In the report, which has been reviewed by HT, the committee acknowledged that the government is doing its bit to prevent misuse of AI “for financial frauds or intimidation or for deepfake audio-videos etc”, but said it still felt the need “to explore the possibility of a comprehensive legislation to prevent the misuse of AI”.

This stands in contrast to the government’s position that a combination of existing laws such as the Information Technology (IT) Act, the Digital Personal Data Protection (DPDP) Act, and India’s criminal laws already covers risks such as bias, misinformation and privacy harms.

The government told the committee that India, with over 900 million internet users, is already the world’s second-largest online market. AI is expected to add $450-500 billion to India’s GDP by 2025, and nearly $967 billion by 2035, accounting for around 10% of the country’s projected $5 trillion economy. Officials also told the committee that AI and automation could generate around 4.7 million new tech jobs by 2027, a figure comparable to the current size of India’s IT workforce.

Recently, MeitY amended the IT Rules to bring synthetically generated information (SGI), or AI-generated content, under their ambit. The rules require platforms to clearly label AI-generated content, embed metadata for traceability, and ensure users disclose when content is synthetic. The new rules also introduce much stricter timelines, including the takedown of unlawful AI content within two to three hours, while reducing grievance redress timelines.

In response to concerns raised by the committee over whether foreign AI models such as Grok, ChatGPT and DeepSeek should be restricted in the Indian market because they train on local data, MeitY acknowledged that foreign AI companies do have access to Indian data for training their models. It added, however, that many global AI systems are trained largely on English-language and global internet datasets, which do not adequately reflect India’s linguistic and cultural diversity.

MeitY also added that existing safeguards under the DPDP Act apply to these platforms, requiring user consent for data processing, transparent privacy policies, and rights such as access and deletion. It added that cross-border data transfers are restricted only to notified countries, and entities handling Indian data overseas must ensure adequate protection, with these rules applying to both domestic and foreign AI platforms.

The ministry also acknowledged that companies building large language models often do not disclose the datasets used for training. It cited ongoing lawsuits, including cases involving OpenAI in India, where courts are yet to decide whether training AI systems on copyrighted material without permission is legally permissible.

Indian news agency ANI filed a case against OpenAI in the Delhi High Court in late 2024, where it accused the maker of ChatGPT of using copyrighted content without permission to train its AI models. The case has since expanded, with the Digital News Publishers Association (DNPA), which represents several major media houses, intervening and alleging that OpenAI is scraping, storing and reproducing news content without licences or attribution, potentially undermining the business of digital journalism. The case is ongoing.

Separately, the Department for Promotion of Industry and Internal Trade (under the commerce ministry) has instituted a committee to review the intersection of generative AI and copyright law. In December 2025, it released Part 1 of its working paper on Generative AI and Copyright, proposing a mandatory system that would allow AI developers to use lawfully accessed copyrighted content for training while ensuring creators receive compensation. MeitY endorsed this view.