
Under-testing AI Models: MeitY's Advisory to Tackle Potential Risks

  • March 7, 2024
  • Gaurav Sahay

The Ministry of Electronics and Information Technology (MeitY) released an advisory on March 1, 2024, specifying that government permission must be obtained before under-testing or unreliable artificial intelligence (AI) models are made available to users on the Indian internet.[1]

A few days earlier, in response to a user’s prompt asking whether the Indian Prime Minister was a fascist, Google’s AI platform Gemini had responded that he has been “accused of implementing policies some experts have characterised as fascist”, listing certain factors. A screenshot of this response was posted on X, and shortly thereafter, MeitY came out with its advisory.

Advisory on AI Models – Due Diligence under IT Rules, 2021

MeitY’s advisory obligates intermediaries to ensure that their AI tools do not permit any bias or discrimination or threaten the integrity of the electoral process. Further, intermediaries that allow the synthetic creation, generation, or modification of content (text, audio, visual, or audio-visual) in a manner such that the information may be used “potentially as misinformation or deepfake” have to label the content. Alternatively, such content may be embedded with a permanent unique metadata or identifier. To deal with misinformation and deepfakes, the Ministry had released advisories in November and December 2023, urging intermediaries to comply with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
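Purely as an illustration (the advisory does not prescribe any particular format, field names, or technology for the label), the “permanent unique metadata or identifier” requirement could, in its simplest form, pair AI-generated content with a unique identifier and a cryptographic hash that ties the label to that specific content. All field names below are hypothetical:

```python
import hashlib
import json
import uuid

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Build a label record for a piece of AI-generated content.

    Hypothetical sketch only: the advisory merely requires labelling or
    embedding a permanent unique metadata/identifier; the structure and
    field names here are illustrative, not prescribed.
    """
    return {
        "synthetic": True,                                      # marks content as AI-generated
        "generator": generator,                                 # model that produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties label to this exact content
        "label_id": str(uuid.uuid4()),                          # permanent unique identifier
    }

record = label_synthetic_content(b"example AI-generated text", "demo-model-v1")
print(json.dumps(record, indent=2))
```

A hash-plus-identifier record of this kind can later be used to verify whether a given piece of content matches its declared label, though real deployments would likely embed the identifier directly in the file’s metadata.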

The Ministry’s latest advisory states, “The use of under-testing/unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated”.
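The “consent popup” mechanism the advisory mentions could, in the simplest case, reduce to a confirmation step shown before any output is displayed. The console sketch below is hypothetical; the notice wording and yes/no handling are illustrative, not prescribed by the advisory:

```python
FALLIBILITY_NOTICE = (
    "This output is generated by an under-testing AI model and may be "
    "unreliable or incorrect."
)

def user_consents(reply: str) -> bool:
    """Interpret a user's reply to the fallibility notice.

    Hypothetical sketch: the advisory only says a 'consent popup'
    mechanism may be used to inform users; this yes/no handling
    is one minimal interpretation.
    """
    return reply.strip().lower() in {"y", "yes"}

def show_output_with_consent(output: str, reply: str) -> str:
    """Console stand-in for a popup: show the notice, then gate the output."""
    print(FALLIBILITY_NOTICE)
    return output if user_consents(reply) else "(output withheld)"

print(show_output_with_consent("model answer", "yes"))
print(show_output_with_consent("model answer", "n"))
```

In a web or mobile interface the same gating logic would sit behind a modal dialog rather than a console prompt.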

As part of the due diligence requirements prescribed under the 2021 Rules, an intermediary is mandated to make reasonable efforts, by itself and by causing the users of its computer resource, not to host any of the content listed under Rule 3(1)(b). The advisory directs intermediaries to inform their users of the consequences of non-compliance with the rules and regulations, privacy policy, or user agreement.

The intermediaries are expected to comply with this advisory and submit an action taken-cum-status report to the Ministry within 15 days.

Amid concerns about the advisory’s impact on startups on account of the resources involved in testing, the Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, clarified in an X post that the permission requirement under the advisory applies to large platforms and not to startups.

Remarks

The advisory reflects a national concern about the potential risks associated with under-testing or unreliable AI models, particularly in the context of their availability to users on the Indian internet. The requirement of explicit permission from the Government of India before deploying such models is a proactive approach to mitigating potential harms, and it signals regulatory oversight aimed at ensuring that only AI systems meeting certain standards are made accessible to users. The labelling of fallibility is equally important: users need to be aware of the limitations of AI systems, especially those that are under-tested or unreliable, and proper labelling can help manage expectations and encourage users to critically assess the outputs these systems generate.

The consent popup mechanism, by explicitly informing users about the potential fallibility or unreliability of AI-generated outputs, adds a further layer of transparency and empowers users to make an informed decision about whether to proceed with using the AI system. Overall, these measures aim to balance the benefits of AI technology against the need to safeguard users and maintain trust. However, effective implementation and enforcement of such regulations will be crucial to their success in practice.

References:

[1] In supersession of the said advisory, MeitY issued another advisory on March 15, 2024, doing away with the requirement to obtain government permission for making such under-testing or unreliable AI models available to users.

Image Credits:

Photo by Supatman on Canva

