India's New IT Rules: Mandatory AI Labeling and 3-Hour Takedown Deadline
Feb 14, 2026

Introduction
The Indian government has enacted significant changes to its 2021 Information Technology Rules, effective 20 February 2026. These amendments place broad, intrusive obligations on social media providers, including Google, YouTube, Instagram, Facebook, and other digital intermediaries in India, and they represent the strongest Indian regulatory response to deepfakes and AI-generated misinformation to date.
Global platforms, including U.S.-based companies, will now have to shoulder this compliance burden as part of their operations in India.
Mandatory Labeling of All AI-Generated Content
Another key change is the formal definition of the term synthetically generated information (SGI): any audio or visual information that is artificially created or modified using AI in a way that gives the impression of truth or authenticity. All content generated by GANs, LLMs, or similar systems must be labeled conspicuously so that users are immediately aware of its synthetic origin.
Platforms with more than 5 million users must require users to declare whether they are uploading AI-generated content. Visual content must carry clear on-screen labels, and audio content must begin with a spoken disclosure. Visual content should also carry embedded permanent digital markers or visible unique digital signatures linking it to the generator, and platforms must not allow users to remove these labels.
The rules specifically exempt standard editing functions, such as color or contrast correction, sound enhancement, formatting, and translation, provided they do not generate false or misleading information.
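The rules describe a permanent marker linking synthetic content to its generator but do not prescribe a format. As one illustration of what such a marker could look like, the sketch below builds a tamper-evident SGI record using an HMAC over the content bytes; every name here (the generator identifier, the record fields, the signing key) is a hypothetical assumption, not a term from the rules.

```python
import hashlib
import hmac
import json

# Hypothetical values: the rules prescribe no format or key scheme.
GENERATOR_ID = "example-image-model-v1"  # assumed generator identifier
SIGNING_KEY = b"demo-key"                # in practice, a managed secret

def make_sgi_record(content: bytes, generator_id: str = GENERATOR_ID) -> dict:
    """Build a labeling record for synthetically generated information (SGI)."""
    digest = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return {
        "label": "AI-generated content",  # conspicuous user-facing label
        "generator": generator_id,        # links the content to its source
        "signature": digest,              # tamper-evident marker
    }

def verify_sgi_record(content: bytes, record: dict) -> bool:
    """Check that the marker still matches the content (not stripped or altered)."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = make_sgi_record(b"fake-image-bytes")
print(json.dumps(record, indent=2))
print(verify_sgi_record(b"fake-image-bytes", record))  # True
print(verify_sgi_record(b"tampered-bytes", record))    # False
```

Because the signature is bound to the content bytes, stripping or editing the label breaks verification, which is the property the "must not allow users to remove the label" requirement appears to target.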
3-Hour Takedown Mandate
One of the most controversial features is the dramatically shortened takedown timeline. When a court or government agency orders the removal of unlawful content, digital platforms now have only 3 hours to comply, down from 36 hours. More sensitive categories, such as non-consensual intimate imagery and deepfake nudity, face an even shorter window: two hours instead of 24.
Part 3 of the Digital Platforms and AI Rules introduces several other significantly shortened deadlines. For example, the general grievance-redressal window, during which complained-of content normally remains on the platform, has decreased from 15 days to 7 days, while sensitive complaints must now be addressed within 12 hours instead of 24 hours.
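The shortened windows above can be summarized in a small deadline helper. The hour values come from the figures in this article; the category keys are illustrative labels of my own, not terms defined in the rules.

```python
from datetime import datetime, timedelta

# Compliance windows (in hours) from the amended rules as described above.
# Category names are illustrative only, not terminology from the rules.
DEADLINES_HOURS = {
    "unlawful_content_order": 3,     # court/government order, down from 36
    "intimate_imagery_deepfake": 2,  # down from 24
    "sensitive_complaint": 12,       # down from 24
    "general_grievance": 7 * 24,     # 7 days, down from 15 days
}

def compliance_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    return received_at + timedelta(hours=DEADLINES_HOURS[category])

order_received = datetime(2026, 2, 20, 9, 0)
print(compliance_deadline("unlawful_content_order", order_received))  # 2026-02-20 12:00:00
print(compliance_deadline("intimate_imagery_deepfake", order_received))  # 2026-02-20 11:00:00
```

A real compliance system would also need to handle time zones and queue escalations, but even this toy table makes the operational pressure visible: a notice received overnight can expire before a human moderator sees it.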
Remarks on the Platforms' Role
While these new obligations impose a heavy burden on digital platforms, failing to meet them also puts the platforms' safe harbor protections under the IT Act at risk. This places an even greater onus on platforms: they must require users to declare content during upload, deploy automated identification systems, remind users quarterly of penalties related to AI content, and take other measures.
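The declare-on-upload duty paired with automated identification suggests a two-stage gate: trust the user's declaration first, and backstop undeclared synthetic content with an automated check. The sketch below is a minimal sketch of that flow under my own assumptions; the classifier is a stub, and the function names are hypothetical rather than anything the rules specify.

```python
# Sketch of the declare-on-upload flow described above. The classifier is
# a placeholder; a real platform would use an actual SGI detection model.
def looks_synthetic(content: bytes) -> bool:
    """Stub for a platform's automated SGI classifier (placeholder heuristic)."""
    return b"ai-generated" in content

def process_upload(content: bytes, user_declared_sgi: bool) -> dict:
    """Label content as SGI if the user declares it or the classifier flags it."""
    if user_declared_sgi or looks_synthetic(content):
        return {"content": content, "label": "AI-generated content"}
    return {"content": content, "label": None}

print(process_upload(b"photo", user_declared_sgi=True)["label"])   # AI-generated content
print(process_upload(b"photo", user_declared_sgi=False)["label"])  # None
```

The design choice worth noting is that the label is applied on the platform side at ingest time, so a user's failure to declare does not by itself leave synthetic content unlabeled.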
Defaulting on the new obligations could result in account suspension, content deletion, and disclosure of user identities to victims, in addition to mandatory reporting to law enforcement agencies.
Implementation Speed
The aggressive timeline has come under criticism from digital policy experts. With over 20 million videos posted to YouTube alone each day, the scope of moderation required is arguably unprecedented. Some fear that such narrow timeframes will lead to excessive over-removal and an outsized chilling effect, especially as platforms risk losing their immunities by failing to act on flagged content in time.
India's three-hour turnaround is among the fastest general takedown requirements in the world. Even the EU, known for its strict regulatory approach, mandates a one-hour window only for terrorist content under its Terrorist Content Online Regulation, while the broader Digital Services Act requires only "expeditious" action without a fixed deadline. Experts cite concerns about the impact of such expedited content removal on freedom of expression, as well as the technical challenges of implementation.
The Future
These amendments significantly tighten India’s IT Rules to address the growing risks of AI-generated misinformation and deepfakes. Effective 20 February 2026, the amendments mandate clear labeling of AI-generated content and impose a strict three-hour takedown deadline on digital platforms, fundamentally reshaping platform compliance and liability obligations in India.
The key test for these regulations will be whether platforms can strike a balance between speed and accuracy while also addressing concerns about potential abuse of an expedited takedown process. The world is closely watching India as other countries contemplate the optimal approach to managing AI-generated content on social media platforms.