Amended IT rules force platforms to permanently mark AI-generated content and delete illegal material at unprecedented speed.
India is instituting a rigorous new framework for managing artificial intelligence and synthetic media online. Under amendments effective later this month, platforms must clearly label AI-generated audio and video—specifically content altered to appear realistic, such as deepfakes—and implement permanent digital markers to trace their origin.
The rules mandate the use of automated tools to detect and block specific categories of illegal AI content, including child sexual abuse material, deceptive documents, and impersonation. Innocuous edits and accessibility features, however, are exempt from these requirements.
Compliance Challenges
Alongside the AI mandates, the government has reduced the deadline for removing illegal content from 36 hours to just three. Prasanto K Roy, a Delhi-based technology analyst, characterized the new framework as “perhaps the most extreme takedown regime in any democracy.”
Roy warned that while the intent behind AI labeling is constructive, the supporting detection technology is still maturing. He further noted that the three-hour removal window makes compliance "nearly impossible" without delegating takedown decisions to machines, stripping the process of the legal assessment it requires.
SOURCES: Digital Futures Lab, BBC, Technology Analyst Prasanto K Roy.
