INTRODUCTION
On March 1, 2024, the Indian Ministry of Electronics and Information Technology (“MeitY”) issued an Advisory for intermediaries and platforms involved in the business of Artificial Intelligence (“AI”) technology. While termed an “advisory”, the document set out a list of mandates, instructing intermediaries to comply with them immediately and to submit an action-taken report to MeitY within 15 days.
As the Advisory caused uproar within India’s growing startup community, the Minister of State for Electronics and Information Technology, Mr. Rajeev Chandrasekhar, quickly issued a clarification on the social media platform X that the Advisory was aimed primarily at large tech firms and would not apply to startups, despite there being no indication to that effect in the Advisory itself. To further complicate matters, news emerged on March 15, 2024, of a revised Advisory from MeitY, rescinding certain aspects of the initial Advisory. This series of conflicting messages has left the tech industry grappling with uncertainty.
The resultant confusion surrounding the Advisory and its impact has sparked debate about responsibility and liability for firms deploying AI. In this explainer, we will delve into the implications of this Advisory on the AI landscape in India, while also exploring approaches adopted by other countries.
WHAT DOES THE ADVISORY SAY?
The Advisory is an addendum to a previous Advisory dated December 26, 2023, which laid down certain due diligence obligations to be met by intermediaries or platforms under the Information Technology Act, 2000 (“IT Act”) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”). The new Advisory addresses the use and development of AI models, large language models, Generative AI, software, or algorithms, by placing the following obligations on intermediaries and platforms:
1. Don’t facilitate access to unlawful content under Rule 3(1)(b) of the IT Rules or in violation of any other provision of the IT Act.
Intermediaries or platforms should ensure that their use of AI models or software does not permit users to host, display, or share unlawful content.
2. Don’t facilitate threats to the integrity of the electoral process.
Platforms should ensure that their AI models or algorithms do not introduce bias or discrimination, or threaten the integrity of the electoral process.
3. Testing and deployment of AI.
The original Advisory required government permission before deploying under-testing or unreliable AI models. The revised Advisory of March 15, 2024 removed this requirement; intermediaries must instead label such models to inform users of their possible unreliability or fallibility.
4. Users should be adequately informed of the consequences of dealing with unlawful information.
Platforms must update terms of service and user agreements to ensure users understand the legal risks of engaging with unlawful content.
5. Intermediaries facilitating deepfakes must label such content.
If an intermediary facilitates the creation or modification of content that could constitute misinformation or a deepfake, it should embed metadata or a unique identifier in that content so that its origin and first creator can be traced (a minimal sketch of one such approach follows this list).
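By way of illustration, the following is a minimal sketch of how an intermediary might embed such a provenance record in a generated PNG image. It assumes Python with the Pillow library; the record layout and field names are hypothetical, as the Advisory prescribes no format, and production systems would more plausibly adopt an industry standard such as C2PA content credentials.

```python
# Illustrative sketch only: embedding a provenance record in an
# AI-generated PNG so the originating platform and first creator can be
# traced. Field names ("platform_id", "creator_id", etc.) are
# hypothetical; the Advisory does not prescribe any format.
import hashlib
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_generated_image(src_path: str, dst_path: str,
                          platform_id: str, creator_id: str) -> None:
    """Attach a provenance label to a generated image as PNG metadata."""
    img = Image.open(src_path)

    # Hash the raw pixel data so later edits to the file are detectable.
    content_hash = hashlib.sha256(img.tobytes()).hexdigest()

    provenance = {
        "ai_generated": True,            # disclosure that content is synthetic
        "platform_id": platform_id,      # identifies the intermediary
        "creator_id": creator_id,        # identifies the first originator
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "content_sha256": content_hash,  # unique identifier for the content
    }

    # Write the record into a PNG text chunk alongside the image data.
    meta = PngInfo()
    meta.add_text("provenance", json.dumps(provenance))
    img.save(dst_path, pnginfo=meta)


if __name__ == "__main__":
    label_generated_image("generated.png", "labeled.png",
                          platform_id="example-platform",
                          creator_id="user-12345")
```

Metadata of this kind is easily stripped by re-encoding or screenshotting, which is one reason labelling debates also consider watermarking techniques that survive such transformations.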
COMPARISON WITH OTHER JURISDICTIONS
India currently lacks comprehensive AI-specific legislation, leading to fragmented governance under existing laws. The MeitY Advisory is an initial step but not a complete framework. Comparing it with global regimes helps identify best practices for India’s upcoming draft AI regulation expected in July 2024.
**Singapore**
The Infocomm Media Development Authority and AI Verify Foundation issued a draft Model AI Governance Framework for Generative AI in January 2024. It outlines nine proposed principles for responsible AI but is not legally binding.
**European Union**
The Draft Artificial Intelligence Act, passed by the European Parliament on March 13, 2024, is the world’s first comprehensive AI regulation. It classifies AI systems by risk level and imposes obligations on developers and deployers.
**United States**
The U.S. lacks a single, comprehensive AI law. Executive Order 14110 (October 30, 2023) sets standards for federal agencies and imposes obligations on private companies regarding AI safety, transparency, and the reporting of dual-use foundation models.
**Key Differences Across Jurisdictions**
- **Unlawful content:** India prohibits intermediating unlawful content; the EU relies on the Digital Services Act.
- **Elections:** India directs platforms to prevent AI-driven threats to electoral integrity; the EU treats election influence as a high-risk AI use; Singapore and the U.S. lack direct provisions.
- **Under-testing AI:** India requires labelling; others focus on disclosure or lifecycle safety.
- **User information:** India mandates informing users; the EU requires deployer disclosure for high-risk AI.
- **Deepfakes:** India and the EU require labelling; Singapore suggests watermarking; some U.S. states (like California) have enacted anti-deepfake laws.
CHALLENGES AND GAPS IN THE ADVISORY
1. **An advisory that reads like a mandate.**
The IT Act does not explicitly empower MeitY to issue binding advisories for AI. The legal validity of such directives is under judicial review in multiple High Courts.
2. **Lack of clarity and transparency.**
The initial requirement for government permission to deploy AI models was withdrawn without explanation. The latest version of the Advisory is not publicly accessible, creating opacity and uncertainty for businesses.
3. **Limited scope of application.**
The Advisory applies only to intermediaries and platforms, leaving developers and startups largely outside its scope, and it imposes identical obligations on all covered entities regardless of risk. This one-size-fits-all approach contrasts with global models like the EU’s risk-based classification or Singapore’s shared-accountability framework.
4. **Treatment of deepfakes.**
India’s Advisory addresses deepfakes directly but does not define them or distinguish between harmful and creative uses. The EU, by contrast, provides exemptions for artistic or satirical works to safeguard free expression.
CONCLUSION
The Advisory has triggered industry-wide concern over its legality, its ambiguity, and its potential to stifle innovation. The rapid issuance and retraction of mandates, coupled with clarifications delivered over social media, have fuelled uncertainty and risk hampering India’s AI ecosystem.
While real threats such as deepfakes and misinformation demand oversight, India needs a coherent, transparent, and consultative legislative framework. Such a framework should define AI categories, set clear obligations for both developers and deployers, and align with international best practices.
India’s forthcoming draft AI law offers an opportunity to balance innovation with accountability—drawing from the EU’s structured compliance, Singapore’s governance model, and the U.S. emphasis on security.