The Securities and Exchange Board of India (SEBI) has released a consultation paper seeking public comments on its proposed guiding principles for the responsible use of Artificial Intelligence (AI) and Machine Learning (ML) in the Indian securities market.
The proposed guidelines aim to optimise benefits and minimise potential risks associated with AI/ML integration.
Stakeholders are invited to submit their comments by July 11, 2025.
Key aspects of the consultation paper are as follows:
- Guiding Principles: SEBI’s proposed framework is built around the following core principles:
- Governance: Market participants using AI/ML models must establish robust governance mechanisms with skilled and experienced internal teams overseeing the performance, controls, testing, efficacy, and security of the algorithms deployed throughout their lifecycle. Senior management with relevant technical knowledge and experience must be responsible for model oversight. There is also a focus on third-party oversight, data governance, backup planning, independent audits, and periodic reviews.
- Disclosures to Clients: If market participants use AI/ML models for business operations that may directly impact their customers, such as asset management or advisory services, they must disclose key details such as product features, risks, limitations and fees.
- Testing: Before deployment, testing should be conducted in an environment segregated from the live environment. Shadow testing is recommended, along with continuous monitoring throughout deployment.
- Fairness: AI/ML-based models must be fair and not discriminate against any client segment. Market participants should implement appropriate processes and controls to detect and eliminate bias from datasets.
- Data Privacy and Cybersecurity: Use of personal data must be in compliance with applicable laws. Market participants must have clear policies for data security, cybersecurity and data privacy for the usage of AI/ML-based models, and must report any technical glitches or data breaches to SEBI and other authorities.
- Tiered Approach Based on Purpose of AI Usage: SEBI proposes a tiered approach, providing a ‘regulatory lite’ framework where AI/ML is used for purposes other than business operations that may directly impact customers, such as internal compliance, surveillance and cybersecurity. In such cases, only a subset of obligations (including periodic reviews, the testing framework and cybersecurity requirements) would apply to SEBI-regulated entities.
- Control Measures for Managing Risks: SEBI outlines specific risks posed by AI/ML in securities markets and proposes the following control measures:
- Malicious Use of Generative AI: Risks such as fraudulent financial statements or misleading news that could lead to market manipulation can be addressed through digital signatures, reporting obligations, and investor education.
- Reliance on Few AI Providers: Dependence on a limited number of generative AI providers can create systemic risks in the event of impairment or failure. To address these risks, SEBI recommends diversification of providers, enhanced monitoring of critical vendors, and periodic reporting on third-party AI service providers.
- Herding and Collusive Behaviour: Widespread use of common models and datasets can lead to correlated, herd-like behaviour across market participants. In view of this, SEBI suggests using diverse models and data sources, monitoring herding behaviour, performing algorithm audits, and introducing circuit breakers.
- Lack of Explainability: To address the challenges arising from the complexity of generative AI models, SEBI proposes maintenance of detailed AI process documentation, use of interpretable AI models or explainability tools, and mandatory human review of AI outputs.
- Model Failure/Runaway AI Behaviour: To prevent financial instability that could result from flaws in generative AI systems, SEBI recommends implementing stress testing, volatility controls, and human oversight mechanisms.
- Non-Compliance: To reduce the risk of regulatory breaches and ensure accountability, SEBI recommends thorough testing of AI systems in controlled environments, implementation of human oversight mechanisms, and staff training on compliance risks associated with AI usage.