The Securities and Exchange Board of India (SEBI) has issued a comprehensive set of draft guidelines to ensure the responsible and transparent use of Artificial Intelligence (AI) and Machine Learning (ML) in India’s securities markets. The proposal emphasizes risk management, fairness, data privacy, and investor protection.
Continuous Risk Oversight and Accountability
Market participants using AI/ML must continuously assess and manage risks to ensure their systems remain robust and resilient. SEBI calls for clear procedures for error handling and fallback mechanisms to keep critical functions operational during disruptions.
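To make the fallback idea concrete, here is a minimal Python sketch, with all names (predict_with_model, rule_based_fallback) invented for illustration rather than drawn from SEBI's paper: the AI model is tried first, and a deterministic rule takes over if the call fails, keeping the critical function operational.

```python
import logging

logger = logging.getLogger("aiml_fallback")

def predict_with_model(order_features: dict) -> str:
    """Hypothetical AI/ML prediction; stands in for a real model call."""
    raise TimeoutError("model service unavailable")  # simulate a disruption

def rule_based_fallback(order_features: dict) -> str:
    """Deterministic fallback rule that keeps the critical function running."""
    return "hold" if order_features.get("risk_score", 1.0) > 0.5 else "execute"

def predict(order_features: dict) -> str:
    """Try the AI model first; on any failure, log the error and fall back."""
    try:
        return predict_with_model(order_features)
    except Exception as exc:  # error handling of the kind the guidelines call for
        logger.error("model failed (%s); using rule-based fallback", exc)
        return rule_based_fallback(order_features)

if __name__ == "__main__":
    print(predict({"risk_score": 0.8}))  # -> "hold", via the fallback path
```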
Senior management with technical expertise will be made responsible for the entire AI model lifecycle, from development and validation to monitoring and compliance.
Third-Party Vendor Responsibility
The regulator highlights that firms remain fully accountable for compliance, even when outsourcing AI/ML operations to external vendors. Service-level agreements must clearly define performance expectations, monitoring processes, and remedies for poor performance.
Ongoing Monitoring and Auditing
Since AI models evolve over time, market participants are required to conduct periodic reviews and share accuracy reports with SEBI. Independent audits — by teams uninvolved in model development — will ensure transparency and fairness, with audit findings reported directly to SEBI.
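As a rough illustration of what such a periodic review could look like in practice, the following sketch (with invented accuracy figures and an invented threshold, not numbers from the paper) compares a model's recent accuracy against its validation-time baseline and flags it for escalation when the drift exceeds tolerance.

```python
from statistics import mean

# Hypothetical monthly accuracy scores for a deployed model (illustrative data).
baseline_accuracy = 0.92                      # accuracy recorded at validation time
monthly_accuracy = [0.90, 0.88, 0.85, 0.82]   # accuracy observed after deployment
ALERT_THRESHOLD = 0.05                        # tolerated drop before escalation

def review(scores: list[float], baseline: float, threshold: float) -> str:
    """Flag the model for re-validation if average accuracy drifts too far."""
    drift = baseline - mean(scores)
    if drift > threshold:
        return f"ALERT: accuracy drifted by {drift:.2%}; escalate for audit"
    return f"OK: drift of {drift:.2%} within tolerance"

print(review(monthly_accuracy, baseline_accuracy, ALERT_THRESHOLD))
```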
Data Governance, Fairness, and Ethical Use
Firms must implement strong data governance frameworks, including ownership, access control, and encryption norms. SEBI insists on unbiased and explainable AI, requiring fair treatment of all investors and regular checks to remove data or algorithmic bias.
Firms are also encouraged to run training programs that make data scientists aware of bias risks, while AI models should respect user autonomy and cultural diversity.
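One common way to operationalize such bias checks, shown here purely as an illustrative sketch with made-up data, is to compare favourable-outcome rates across investor segments and flag the model when the disparity crosses a tolerance set by the firm.

```python
# Minimal sketch of a periodic bias check: compare outcome rates of a
# hypothetical advisory model across investor segments (illustrative data).
decisions = {
    "segment_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 1 = favourable outcome
    "segment_b": [1, 0, 0, 1, 0, 0, 1, 0],
}
MAX_DISPARITY = 0.20  # tolerated gap in favourable-outcome rates

rates = {seg: sum(d) / len(d) for seg, d in decisions.items()}
disparity = max(rates.values()) - min(rates.values())

print(f"favourable-outcome rates: {rates}")
if disparity > MAX_DISPARITY:
    print(f"bias check FAILED: disparity {disparity:.2f} exceeds {MAX_DISPARITY}")
else:
    print(f"bias check passed: disparity {disparity:.2f}")
```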
Investor Transparency and Disclosure
Market participants using AI/ML in customer-facing services — such as algorithmic trading, asset management, or advisory services — must clearly disclose their use of AI to clients. Disclosures should outline product features, potential risks, data quality, and accuracy, using plain language to help investors make informed decisions. Investor grievance mechanisms must align with SEBI’s existing frameworks.
Testing and Model Validation
SEBI proposes rigorous pre-deployment testing in isolated environments and continuous monitoring once AI systems go live. Firms must retain documentation of models and datasets for five years so that results remain traceable and explainable. The guidelines also encourage shadow testing with live data to validate performance before deployment.
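The sketch below illustrates the shadow-testing idea under assumed conditions: a hypothetical candidate model scores the same live inputs as the incumbent production model, but only the production decision is executed, and the agreement rate is recorded for the validation file.

```python
# Minimal sketch of shadow testing: the candidate model scores the same live
# inputs as the production model, but only the production output is acted on.

def production_model(x: float) -> str:          # hypothetical incumbent
    return "buy" if x > 0.5 else "sell"

def candidate_model(x: float) -> str:           # hypothetical new AI model
    return "buy" if x > 0.45 else "sell"

live_inputs = [0.30, 0.48, 0.55, 0.70, 0.42]    # illustrative live data
agreements = 0
for x in live_inputs:
    served = production_model(x)                # this decision is executed
    shadow = candidate_model(x)                 # this one is only recorded
    agreements += served == shadow

print(f"shadow agreement: {agreements}/{len(live_inputs)}")
```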
Data Privacy and Cybersecurity
Given AI’s dependence on vast data processing, SEBI requires strict data security, privacy, and cyber resilience policies. Any data breaches or system failures must be reported promptly to SEBI and relevant authorities under existing legal frameworks.
Tiered Implementation
A tiered regulatory approach is proposed:
- A lighter compliance regime will apply to internal AI uses (like compliance, surveillance, or cybersecurity).
- A stricter regime will cover business operations directly impacting clients.
Key Risks and Controls Identified
Annexures to the paper outline potential risks from malicious AI use, vendor concentration, herding behavior, lack of explainability, and regulatory non-compliance, along with mitigating measures such as AI watermarking, diverse data sources, stress testing, and human accountability.