The Government of India has proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The Ministry of Electronics and Information Technology (MeitY) seeks to introduce mandatory labelling and disclosure norms for artificial intelligence-generated content shared across digital platforms.
Under the proposed framework, every AI-generated image, video, or audio clip would need to carry a visible label identifying its synthetic origin, so that audiences can clearly distinguish authentic content from material created or altered by generative AI systems. For images and videos, the label must occupy a defined portion of the frame; for audio content, the disclosure must appear at the start of the recording.
Users uploading content will also be expected to declare whether the material has been generated or modified using AI tools. In parallel, digital platforms will have to maintain metadata traceability and implement automated systems capable of flagging synthetic content at the point of upload.
The proposed revisions come amid growing concerns over the misuse of generative technologies to produce deepfakes, misinformation, and manipulated media that could influence public perception and democratic processes. India, home to one of the world’s largest internet user bases, has witnessed an accelerating proliferation of AI-assisted media, prompting calls for regulatory intervention.
By enforcing visible labelling standards, the government hopes to bolster public trust in digital communications while fostering responsible AI innovation. Industry stakeholders have been invited to submit their feedback on the draft rules by early November before the amendments are finalised.
This initiative places India among the early global adopters of mandatory AI content disclosures, marking a significant stride towards ethical AI governance and transparent digital ecosystems.