In a significant move to regulate the burgeoning field of Artificial Intelligence (AI) in India, the government has introduced a new set of guidelines requiring AI platforms to obtain official permission before launching any AI-related products in the country. Union Minister Rajeev Chandrasekhar announced the new advisory on Saturday, emphasizing the government’s commitment to ensuring the responsible use of AI technologies.
The advisory, which was issued on the evening of March 1, is effective immediately and applies to all intermediaries involved in the development and deployment of AI technologies. These intermediaries are also required to submit an action taken-cum-status report to the ministry within 15 days of the advisory’s issuance.
Union Minister Chandrasekhar highlighted the necessity of establishing a rigorous framework for AI deployment, drawing parallels with other industries where safety and reliability standards are strictly enforced. “This signals that we are moving to a regime when a lot of rigour is needed before a product is launched. You don’t do that with cars or microprocessors. Why is that for such a transformative tech like AI there are no guardrails between what is in the lab and what goes out to the public,” Chandrasekhar stated.
The advisory aims to instill discipline among AI developers, preventing them from transferring AI models directly from the lab to the market without adequate safeguards. The government’s stance is clear: AI-generated content must now be labelled or embedded with unique metadata or an identifier. This measure is intended to help trace the creator or originator of any misinformation or deepfake content, thereby enhancing accountability and transparency in the digital ecosystem.
Furthermore, the Union Minister stressed the importance of consumer awareness and consent, especially when deploying AI models that may still be prone to errors. “If they want to deploy a model that is error-prone, they have to label it as under testing, take government permission and explicitly seek confirmation and consent of the user that it is an error-prone platform. They can’t come back later and say it is under testing,” he explained.
This new regulatory approach by the Indian government marks a crucial step towards fostering a safe and responsible AI landscape in the country. By introducing these guidelines, the government aims to balance the rapid advancement of AI technologies with the need to protect consumers and uphold ethical standards in the digital domain.