Under-testing AI Models: MeitY's Advisory to Tackle Potential Risks
- March 7, 2024
- Gaurav Sahay
A few days ago, in response to a user’s prompt asking whether the Indian Prime Minister was a fascist, Google’s AI platform Gemini responded that he had been “accused of implementing policies some experts have characterised as fascist”, citing certain factors. A screenshot of this response was posted on X, and shortly thereafter MeitY came out with its advisory.
Advisory on AI Models – Due Diligence under IT Rules, 2021
MeitY’s advisory obligates intermediaries to ensure that their AI tools do not permit any bias or discrimination, or threaten the integrity of the electoral process. Further, intermediaries that allow the synthetic creation, generation, or modification of content (text, audio, visual, or audio-visual) in such a manner that the information may be used “potentially as misinformation or deepfake” must label such content. Alternatively, such content may be embedded with permanent unique metadata or an identifier. To deal with misinformation and deepfakes, the Ministry had released advisories in November and December of last year urging intermediaries to comply with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
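The advisory does not prescribe any technical format for such labelling. As a purely illustrative sketch, the Python snippet below uses the Pillow library to embed an identifying text chunk and a unique identifier into a PNG image; the key names ("synthetic-content", "content-identifier") are assumptions for illustration, not a scheme mandated by the advisory.

```python
# Illustrative sketch only: the advisory prescribes no technical format,
# so the metadata keys and identifier scheme below are hypothetical.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_png(src_path: str, dst_path: str, identifier: str) -> None:
    """Embed a unique identifier marking a PNG as synthetically generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    # Hypothetical keys; a real deployment would follow whatever labelling
    # scheme the intermediary adopts.
    meta.add_text("synthetic-content", "true")
    meta.add_text("content-identifier", identifier)
    img.save(dst_path, pnginfo=meta)


label_synthetic_png("generated.png", "generated_labelled.png", str(uuid.uuid4()))
```

Metadata of this kind survives ordinary file copies but can be stripped by re-encoding, which is one reason production provenance schemes (such as C2PA) cryptographically bind the metadata to the content.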
The Ministry’s latest advisory states, “The use of under-testing/unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated”.
As part of the due diligence requirements prescribed under the 2021 Rules, an intermediary must make reasonable efforts, by itself and by causing the users of its computer resource, not to host any of the content listed under Rule 3(1)(b). The advisory also directs intermediaries to inform their users of the consequences of non-compliance with the rules and regulations, privacy policy, or user agreement.
Intermediaries are expected to comply with this advisory and submit an action-taken-cum-status report to the Ministry within 15 days.
Amid concerns about the advisory’s impact on startups, given the resources involved in testing, the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, clarified in an X post that the permission requirement under the advisory applies to large platforms and not to startups.
Remarks
The notification reflects a national concern about the potential risks associated with under-tested or unreliable AI models, particularly in the context of their availability to users on the Indian Internet. As such, requiring explicit permission from the Government of India before deploying under-tested or unreliable AI models is a proactive approach to mitigating potential harms. It suggests regulatory oversight intended to ensure that only AI systems meeting certain standards are made accessible to users. From the perspective of labelling fallibility, it is crucial for users to be aware of the limitations of AI systems, especially those that are under-tested or unreliable. Proper labelling can help manage expectations and encourage users to critically assess the outputs these systems generate.
Implementing a consent-popup mechanism to inform users about the potential fallibility or unreliability of AI-generated outputs adds a further layer of transparency. It empowers users to make an informed decision about whether to proceed with using the AI system. Overall, these measures aim to balance the benefits of AI technology against the need to safeguard against potential risks and to preserve user safety and trust. However, effective implementation and enforcement of such regulations will be crucial to their success in practice.