AI Regulation in India: What Commercial Lawyers in India Think Businesses Should Prepare For

Commercial lawyers advise businesses on regulatory compliance, contractual risks, and governance frameworks. With the rapid deployment of AI tools across industries, commercial lawyers increasingly assist organisations in assessing how emerging AI regulations apply to their operations.

The drafting committee on AI governance constituted by the Ministry of Electronics and Information Technology (MeitY) published the ‘India AI Governance Guidelines’ on 15 February 2026 during the AI Impact Summit 2026. Centred on the theme of “AI for All”, the guidelines seek to ensure that AI is not concentrated in a handful of firms or geographies but is instead leveraged through public digital infrastructure, indigenous model development, and affordable compute to drive productivity and inclusive growth. In light of these guidelines, our commercial lawyers in India have collated a list of items that businesses should prepare for:

  • Businesses should anticipate structured engagement with newly proposed institutions such as the AI Governance Group and the AI Safety Institute, which are expected to play a central role in setting technical standards, risk benchmarks, and compliance expectations.
  • With the guidelines recommending an India-specific AI risk assessment and classification framework, a national federated AI incident reporting mechanism, an AI incidents database, and human oversight mandates in sensitive sectors, businesses must map AI systems by risk level, maintain internal incident logging mechanisms, establish AI harm reporting workflows, and be prepared for regulator-facing transparency.
  • SaaS companies, startups, and enterprises integrating third-party AI need to be mindful of the graded liability across the AI value chain. Liability is slated to be proportional to the function, risk, and due diligence of AI actors. While the due diligence standards may vary, documentation of safeguards could reduce risk.
  • Businesses using GenAI must prepare for watermarking, traceability, faster takedown timelines, and synthetic content disclosures.
  • The guidelines explicitly recommend privacy-enhancing technologies, machine unlearning capabilities, algorithmic auditing systems, and automated bias detection. As these recommendations are likely to shape compliance expectations, businesses should expect to invest in bias audits, data lineage tracking, and internal AI governance boards.

In the absence of a dedicated AI statute, commercial legal advisors in India currently rely on a combination of existing laws and regulatory instruments. The key acts and provisions relating to artificial intelligence are outlined below.


  • The IT Rules were amended on 20 February 2026 to introduce the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2026, which specifically deal with AI-generated and deepfake content. The amendment aims at curbing the spread of misinformation, identity fraud and deception, non-consensual intimate imagery, obscenity, child sexual abuse material, reputational harm, coercion, and extortion. The amended rules seek to achieve this through clear definitions of synthetically generated information (SGI); user awareness and accountability obligations, including advisories and warnings for SGI-enabling intermediaries; a due diligence framework; labelling and provenance metadata requirements; deployment of reasonable and appropriate technical measures; enhanced obligations for Significant Social Media Intermediaries (SSMIs); and faster response and victim protection mechanisms.
  • The Digital Personal Data Protection Act, 2023 (DPDP Act) and the DPDP Rules, 2025 do not regulate AI directly, but they function as India’s most important indirect regulatory framework for AI because most AI systems rely on personal data collection, training datasets, profiling, and automated processing. AI developers and deployers become Data Fiduciaries or Data Processors when their systems process personal data. If data is used for training models or personalising AI outputs, companies must obtain consent that is free, specific, informed, unconditional, and unambiguous. This affects AI training, advertising, and related activities. The legislation also requires companies using AI to disclose whether data is used for automated analytics, AI-based profiling, or algorithmic decision-making. AI systems must therefore allow data traceability and deletion, and purpose limitation and data minimisation must be ensured. Large AI companies or platforms may be classified as Significant Data Fiduciaries (SDFs), attracting enhanced obligations. Edtech entities and those targeting children must additionally install mechanisms for age verification and parental consent, and ensure that there is no tracking or behavioural monitoring of children.
  • While the Bharatiya Nyaya Sanhita (BNS), 2023 also does not specifically regulate artificial intelligence, several of its provisions indirectly govern AI-enabled offences, particularly those involving deepfakes, identity theft, misinformation, cheating, and automated fraud. For instance, AI-enabled cheating, especially in cases such as deepfake investment scams or AI-generated phishing, would fall under Section 318. Similarly, forgery and fabricated digital content would fall under Sections 336–338. Defamation and criminal intimidation may also be carried out through AI tools and would likewise attract liability under the BNS.
  • The Ministry of Electronics and Information Technology (MeitY) issued the first formal regulatory advisory specifically addressing AI platforms and generative AI models in March 2024, extending due diligence obligations under the IT Rules to AI systems. A clarification was subsequently issued focusing on transparency, user awareness, and platform accountability.
  • The Securities and Exchange Board of India (SEBI) has gradually introduced disclosure requirements for the use of AI in the securities market. Through circulars issued on 4 January 2019 (for intermediaries) and 9 May 2019 (for entities in the mutual fund ecosystem), SEBI required reporting of the use of AI applications and systems. This framework was strengthened through a June 2024 circular mandating quarterly disclosure by mutual funds using AI. Subsequently, regulations dated December 2024 and guidelines issued in January 2025 require investment advisers and research analysts to disclose AI usage in their operations, while regulations issued in February 2025 make intermediaries solely responsible for the privacy, security, integrity, and legal compliance of data and outputs generated through AI tools.
  • Separately, the Reserve Bank of India (RBI) addressed AI governance through its August 2025 report on the Framework for Responsible and Ethical Enablement of AI (FREE-AI) in the financial sector.
  • The Department of Telecommunications also unveiled a New Standard for Fairness Assessment and Rating of Artificial Intelligence Systems in 2023, outlining procedures for assessing and rating AI systems for fairness.
  • Consumer protection and intellectual property laws also have some bearing on the regulation of artificial intelligence in India.

In this evolving regulatory landscape, corporate commercial lawyers will need to assess the implications of both in-house and third-party AI tools on a case-by-case basis. This involves advising businesses on how existing laws, sectoral guidelines, and emerging policies apply to their specific operations, and recommending governance frameworks and safeguards that manage current regulatory risks while positioning the organisation for future AI regulation.