News

AI Content: MeitY Notifies Much-Anticipated IT Amendment Rules

The Ministry of Electronics and Information Technology (MeitY) has notified the much-anticipated amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at addressing growing concerns around the misuse of synthetic content, including deepfakes.

Notably, the Draft Rules released in October 2025 have undergone substantial changes in their final form, with the amendments placing a clear emphasis on speed and responsiveness in intermediary compliance.

Key aspects of the updated framework include:

  • Effective date: The Amendment Rules will come into effect on February 20, 2026.
  • Synthetically generated information: The Rules now define synthetically generated information as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.” While text-based information is impliedly excluded, certain express exceptions, such as routine or good-faith editing, have also been carved out. It is further clarified that, for the purposes of the Rules, any reference to ‘information’ in the context of information used to commit an unlawful act will be construed to include synthetically generated information.
  • Stricter intermediary due diligence:

a) More frequent and explicit user intimation: All intermediaries must inform users at least once every three months that non-compliance with the intermediary’s rules, privacy policy or user agreement may result in termination or suspension of access or usage rights, removal or disabling of access to non-compliant information, or both, as the case may be; that such non-compliance may attract penal consequences under applicable law; and that mandatory reporting to appropriate authorities may follow where required.

b) Shortened takedown timeline: Upon receipt of actual knowledge, intermediaries are required to remove or disable access to unlawful information within three hours, a sharp reduction from the earlier 36-hour window.

c) Expedited grievance redressal: The timeline for resolving user complaints has been cut from fifteen days to seven days.

d) Obligations regarding synthetically generated content: Intermediaries offering computer resources that enable the creation or modification of synthetically generated information must deploy reasonable and appropriate technical measures to prevent unlawful synthetic content, and must ensure that lawful synthetic content is prominently labelled and embedded with permanent metadata or other appropriate technical provenance mechanisms, to the extent technically feasible.

  • Additional requirements for SSMIs: Significant social media intermediaries (SSMIs) face additional compliance requirements, including obtaining user declarations as to whether uploaded information is synthetically generated, deploying reasonable and proportionate technical measures to verify such declarations, and ensuring that such information is labelled accordingly.

Concluding thoughts

The amendments signal a tighter regulatory approach to AI-generated content, placing greater responsibility on intermediaries to prevent misuse while allowing limited exceptions. They also introduce more time-bound intermediary obligations, including faster takedowns and expedited grievance redressal. The real test will lie in how these obligations are implemented in practice.