The Office of the Principal Scientific Adviser (OPSA) to the Government of India, in collaboration with the iSPIRT Foundation and the Centre for Responsible AI (IIT Madras), convened a high-level roundtable on “Techno-Legal Regulation for Responsible, Innovation-Aligned AI Governance”. The roundtable was organised as an official pre-summit event ahead of the India AI Impact Summit 2026 and was chaired by Prof. Ajay Kumar Sood, Principal Scientific Adviser (PSA) to the Government of India.
The roundtable was joined by Dr. Preeti Banzal, Adviser/ Scientist ‘G’, Office of PSA; Kavita Bhatia, Scientist ‘G’ and Group Coordinator, Ministry of Electronics and Information Technology, Government of India; Hari Subramanian, Volunteer, iSPIRT Foundation, and Co-founder & CEO, Niti AI; Prof. Balaraman Ravindran, Head, Centre for Responsible AI, IIT Madras; Prof. Mayank Vatsa, Professor, IIT Jodhpur; Jhalak Kakkar, Director, Centre for Communication Governance, National Law University, Delhi; and Abilash Soundararajan, Founder & CEO, PrivaSapien, among other senior stakeholders and subject-matter experts.
Setting the context, Dr. Banzal outlined India’s approach to techno-legal regulation, emphasising the importance of practical implementation mechanisms, exemplary pathways to AI governance, enabling policy frameworks, capacity building, and global cooperation.
In his keynote address, Prof. Sood spoke about India’s readiness to adopt a techno-legal approach to AI governance, highlighting the need to embed legal and regulatory principles directly into AI systems to ensure accountability, transparency, data privacy, and cybersecurity by design. He encouraged participants to explore all plausible approaches to creating a techno-legal governance framework.
The co-moderators, Subramanian and Prof. Ravindran, discussed key challenges and metrics, including data protection, leakage risks, differential privacy, accuracy, and throughput, noting the trade-offs between privacy and system performance. They underlined the importance of equity in access, data sovereignty, and broader economic and strategic considerations.
The experts highlighted the need for robust data privacy and consent mechanisms across AI training, inference, and deployment; convergence with the DEPA framework; and the adoption of compliance-by-design architectures to support the global scalability of Indian AI governance models. The discussions also addressed regulatory responses to non-deterministic AI systems and AI-generated content, including copyright concerns, while underscoring the challenges of operationalising techno-legal frameworks for AI governance. Participants emphasised that AI model robustness must be balanced against technical and socio-economic trade-offs, and that emerging solutions should be practical, accessible, and usable at the end-user level.
The discussions underscored the need to develop a standardised evaluation framework for responsible AI across the full lifecycle of AI systems, translate these insights into effective policy levers, and embed safety and governance measures directly into AI technology stacks to mitigate risks and promote equitable access.