News

Harm Mitigation as the Core Regulatory Principle for AI Governance

On January 6, 2025, the Ministry of Electronics and Information Technology (MeitY) issued a report on the development of AI governance guidelines. The report outlines a “coordinated, whole-of-government approach” to ensure compliance and strengthen governance as India’s AI ecosystem continues to develop.

This report is particularly significant as it offers the most comprehensive insight so far into the government’s vision for AI in India. The Centre appears to aim for a balance between maximizing the benefits of AI investments and implementing self-regulatory frameworks to mitigate potential risks effectively.

In its year-end review of 2024, MeitY emphasized that it will focus on developing AI regulation. Under the leadership of India’s Principal Scientific Advisor (PSA), a multi-stakeholder advisory group was established to develop an “AI for India-Specific Regulatory Framework.” This group includes representatives from various ministries and is tasked with guiding AI governance efforts and providing insights to ensure sustainable and ethical AI development.

Building on this effort, the Advisory Group constituted a Subcommittee on November 9, 2023, tasked with analyzing existing gaps and recommending a comprehensive governance framework for Artificial Intelligence (AI). 

Following detailed deliberations, the Subcommittee released a report outlining key recommendations to shape the future of AI governance in India. These recommendations are grounded in an in-depth review of the current legal and regulatory landscape and reflect an independent perspective on promoting AI innovation while safeguarding public interests.

Currently, several key frameworks in India provide foundational principles for responsible AI development:

  • NITI Aayog: Principles of Responsible AI (2021), Operationalising Principles (2021), and the FRT Report (2022)
  • ICMR: Ethics Guidelines for AI in Biomedical Research and Healthcare
  • State initiatives: Tamil Nadu’s Safe & Ethical AI Policy (2020) and Telangana’s AI Procurement Guide
  • TEC: Voluntary Standards for Fairness Assessment and Robustness of AI Systems in Telecom
  • Nasscom: Responsible AI Resource Kit (2022) and Guidelines for Generative AI (2023)

The report emphasizes that regulation should aim to minimise the risk of harm. Enabling innovation is itself a form of harm minimisation, since a lack of legal clarity or gaps in the law can prevent people from innovating. Harm mitigation should therefore be the core regulatory principle when operationalising the seven principles discussed in the report.

Considerations to Operationalize AI Governance Principles

The Subcommittee on AI Governance has outlined three key considerations to guide the operationalization of AI governance principles in India, ensuring a robust, adaptable, and scalable framework.

  1. Lifecycle Approach to AI Systems
    Governance should consider the entire lifecycle of AI systems, encompassing three stages:
  • Development: Focus on the design, training, and testing phases.
  • Deployment: Examine the implementation and operational use of AI systems.
  • Diffusion: Evaluate the long-term impact of widespread AI adoption across sectors.
    This lifecycle approach recognizes that risks and governance needs vary at different stages.
  2. Ecosystem Perspective of AI Actors
    AI governance should adopt a holistic view, considering the diverse actors involved in the AI ecosystem, including:
  • Data Principals and Providers
  • AI Developers (e.g., Model Builders)
  • AI Deployers (e.g., App Builders, Distributors)
  • End-users (businesses and citizens)
    Traditional governance methods often focus on isolated actors, limiting effectiveness. An ecosystem approach promotes clarity in roles, responsibilities, and liabilities, fostering collaboration and better outcomes across the value chain.
  3. Techno-Legal Approach for Governance
    To address the complexity and rapid evolution of AI systems, a “techno-legal” strategy is recommended. This combines legal and regulatory measures with technology tools to:
  • Mitigate risks and scale compliance across the ecosystem.
  • Automate monitoring and enforce regulatory obligations through technologies like blockchain tracking, AI compliance systems, and smart contracts.
  • Create tools such as “consent artefacts” to assign unique identities to participants, establish liability chains, and enable self-regulation within the ecosystem.

The approach emphasizes periodic reviews of automated tools to ensure accuracy, fairness, and compliance with fundamental rights like privacy and free speech.
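To make the “consent artefact” idea concrete, the sketch below shows how such artefacts could assign unique identities to ecosystem participants and chain them into a traceable liability record. This is a hypothetical illustration: the field names, roles, and hashing scheme are assumptions, not the report’s design.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ConsentArtefact:
    """Illustrative record assigning a unique identity to an AI-ecosystem actor."""
    participant_id: str   # unique identity of the actor (hypothetical field)
    role: str             # "data_principal", "developer", "deployer", or "end_user"
    scope: str            # what the artefact covers, e.g. "model_training"
    issued_at: str        # ISO-8601 timestamp
    prev_hash: str        # hash of the previous artefact, forming a liability chain

    def digest(self) -> str:
        # Deterministic hash over the artefact's fields.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Chain two artefacts: the deployer's record links back to the developer's.
dev = ConsentArtefact("dev-001", "developer", "model_training",
                      "2025-01-06T00:00:00+00:00", prev_hash="0" * 64)
dep = ConsentArtefact("dep-001", "deployer", "app_distribution",
                      "2025-01-07T00:00:00+00:00", prev_hash=dev.digest())

# Verifying the link re-establishes the chain of responsibility.
assert dep.prev_hash == dev.digest()
```

Chaining hashes in this way is one plausible reading of how technology tools could “establish liability chains” across developers and deployers; the report itself does not prescribe a specific mechanism.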

The report also presents six key recommendations:

  1. Establish a Committee/Group whose overall purpose is to bring the key institutions around a common roadmap and to coordinate their efforts to implement a whole-of-government approach. The Committee/Group should include a mix of official and non-official members and external experts, so that discussions take on board diverse perspectives.
  2. Set up a Technical Secretariat under MeitY to act as a technical advisory body and coordination hub for the Committee/Group.
  3. Develop and maintain an AI incident database through the Technical Secretariat to document real-world issues, build evidence of risks, and guide harm mitigation strategies.
  4. Foster industry collaboration by encouraging voluntary transparency commitments across the AI ecosystem, particularly for high-capability systems.
  5. Investigate and implement technological solutions to address AI-related risks, enabling real-time identification and monitoring of negative outcomes across various sectors.
  6. Establish a sub-group to collaborate with MeitY in proposing measures under the Digital India Act (DIA) to enhance the legal framework, regulatory and technical capacities, and adjudicatory mechanisms for digital industries, ensuring effective grievance redressal and improved ease of doing business.
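The incident database in recommendation 3 could, for illustration, hold structured records along the lines below. The schema is a hypothetical sketch for discussion, not the Technical Secretariat’s design; every field name and the severity scale are assumptions.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AIIncident:
    """Hypothetical schema for one entry in an AI incident database."""
    incident_id: str
    reported_on: str          # ISO-8601 date
    sector: str               # e.g. "healthcare", "finance", "telecom"
    system_description: str   # the AI system involved
    harm_category: str        # e.g. "bias", "misinformation", "privacy_breach"
    severity: int             # 1 (minor) to 5 (critical); illustrative scale
    mitigations: list[str] = field(default_factory=list)

def to_record(incident: AIIncident) -> str:
    """Serialise an incident as JSON for storage or exchange."""
    return json.dumps(asdict(incident), sort_keys=True)

# A sample entry, entirely invented for illustration.
example = AIIncident(
    incident_id="2025-0001",
    reported_on="2025-01-06",
    sector="finance",
    system_description="credit-scoring model",
    harm_category="bias",
    severity=3,
    mitigations=["model retraining", "human review of declined applications"],
)
record = to_record(example)
```

Structured, machine-readable records like this would support the report’s stated goals of documenting real-world issues, building evidence of risks, and guiding harm mitigation across sectors.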

The report has been released for public consultation to align governance mechanisms with India’s aspirations. The initiative aims to foster a robust, inclusive, and adaptive framework that addresses both the challenges and opportunities of technological advancement. Stakeholders are invited to provide feedback by January 27, 2025.