Indian Government Tightens Rules for AI Content: What the New Deepfake Laws Mean for You

India's new AI content rules kick in on February 20. Here's what the 3-hour takedown mandate and mandatory labelling change for users, creators, and platforms.

The Indian government just rewrote the rulebook on AI-generated content. Through an official gazette notification published Tuesday, the Ministry of Electronics and Information Technology slashed the time social media platforms have to remove flagged synthetic content from 36 hours to just three—and that's only the beginning.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules 2026 come into force on February 20, introducing sweeping changes that affect everyone from casual social media users to professional content creators. 

The new regulations mandate prominent labelling of all AI content, require platforms to deploy automated detection systems, and establish strict penalties for violations.

The government's move follows mounting complaints about harmful AI-generated content flooding Indian social media. Officials specifically cited incidents involving sexualized deepfake images, including those depicting minors, on platforms like X. The Ministry of Electronics and Information Technology determined these weren't isolated cases but symptoms of a larger problem requiring immediate regulatory intervention.

Under the amendments, "synthetically generated information" now has a formal legal definition: any audio, video, or image artificially created or altered using computer resources in a way that appears real enough to deceive people about actual persons or events.

The Core Changes

Starting February 20, platforms offering AI content creation tools must ensure every piece of synthetic media carries a clear, prominent label—permanently. The rules explicitly prohibit platforms from allowing users to remove or suppress these labels once applied.

When Indian authorities or courts flag content for removal, platforms face a 180-minute deadline to comply. User grievance response times drop from 15 days to seven days. For complaints about rights violations, platforms must respond within two hours, down from 24.

Significantly, social media companies must now verify user declarations about whether content is AI-generated before publication. They're required to deploy automated detection tools and prominently display "AI-generated" warnings on confirmed synthetic media.
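The amendment doesn't prescribe how that verification pipeline should work. As a rough illustration only, the flow it describes (user declaration, automated check, permanent label) might look something like the following Python sketch. Every name here is hypothetical, and the detection step is a stub standing in for whatever classifier or provenance check a platform actually deploys:

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    media_id: str
    declared_ai: bool                 # the user's pre-publication declaration
    labels: list[str] = field(default_factory=list)

def detect_synthetic(upload: Upload) -> bool:
    """Stub for the mandated automated detection step.

    The rules require 'automated tools' but leave the technique open;
    real systems might combine provenance metadata with ML classifiers.
    """
    return False  # placeholder: a real detector would inspect the media itself

def moderate(upload: Upload) -> Upload:
    # Label when the user declares AI use OR detection flags the media:
    # platforms must verify declarations, not merely trust them.
    if upload.declared_ai or detect_synthetic(upload):
        upload.labels.append("AI-generated")
    return upload

def remove_label(upload: Upload, label: str) -> None:
    # The rules prohibit removing or suppressing a label once applied.
    raise PermissionError("AI-content labels are permanent under the 2026 rules")
```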

What This Means For You

For Everyday Social Media Users

Expect quarterly reminders from platforms about the AI content rules and the penalties for breaking them. When you scroll through Instagram, X, or Facebook after February 20, you'll start seeing "AI-generated" or "Synthetically created" labels on more content. Platforms can suspend or terminate accounts that repeatedly post unlabelled AI content or otherwise violate the new rules.

If someone creates a harmful deepfake of you, you can now expect faster takedowns and, in some cases, disclosure of the violator's identity to you as the victim.

For Bloggers and Content Creators

You must declare whether your content is AI-generated before posting. This covers AI-created images in blog posts, AI-written text portions, or AI-enhanced videos. However, routine editing—colour correction, basic enhancements, compression—doesn't count as synthetic content.

Good news: Educational content, research materials, and illustrative graphics used in good faith remain exempt, as long as they don't create false documents or misrepresent real people or events.

For Journalists and News Organisations

The rules create specific carve-outs for professional content creation. Using AI for translation, transcription, searchability improvements, or accessibility features falls outside the "synthetic content" definition. Template-based graphics, data visualisations, and illustrative content for articles also get exceptions.

But if you use AI to generate quotes, fabricate interviews, or create misleading imagery of real people or events, you're violating the law—and platforms must label such content appropriately.

For YouTubers and Video Creators

Before uploading AI-generated or AI-enhanced videos, you'll need to declare this to the platform. YouTube and similar services must then verify your declaration and apply permanent labels. Using AI for basic video editing, noise reduction, or format conversion doesn't trigger labelling requirements. But deepfakes, AI voice cloning, or synthetic footage depicting real people falsely must be clearly marked.

Violations can result in immediate video removal, channel suspension, or permanent bans. Serious offences involving child safety, non-consensual intimate imagery, or election-related misinformation trigger mandatory reporting to authorities.

For Social Media Activists and Digital Campaigners

Political deepfakes face heightened scrutiny under the Representation of the People Act 1951. Creating or sharing synthetic content that falsely depicts politicians, fabricates statements, or misrepresents events during election periods can attract criminal penalties beyond account suspension.

Platforms must deploy automated systems specifically targeting political deepfakes and election-related synthetic media. Your content will face more aggressive filtering, and false declarations about AI usage can lead to identity disclosure to complainants.

For News Media Publications

Major publications classified as "significant social media intermediaries" face additional due diligence requirements. You must implement technical measures to verify user-submitted content for AI manipulation before publication. Knowingly publishing unlabelled deepfakes or failing to act on violations can strip your safe harbour protections, exposing you to direct legal liability.

The rules require you to maintain systems for identifying previously removed harmful content to prevent re-uploads—essentially creating a hash database of banned deepfakes.
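The rules don't name a technology for this, but re-upload blocking of this kind is typically built on hash matching. Below is a minimal sketch assuming exact-match SHA-256 digests; production systems would more likely use perceptual hashes (such as PDQ or PhotoDNA) that survive re-encoding and minor edits, since an exact hash is defeated by changing a single byte. All class and method names are illustrative:

```python
import hashlib

class TakedownRegistry:
    """Stores digests of removed media and blocks byte-identical re-uploads."""

    def __init__(self) -> None:
        self._banned: set[str] = set()

    @staticmethod
    def _digest(media_bytes: bytes) -> str:
        return hashlib.sha256(media_bytes).hexdigest()

    def register_removal(self, media_bytes: bytes) -> None:
        # Called when content is taken down under the new rules.
        self._banned.add(self._digest(media_bytes))

    def is_banned(self, media_bytes: bytes) -> bool:
        # Checked at upload time, before the media goes live.
        return self._digest(media_bytes) in self._banned

# Usage: once a deepfake is removed, identical uploads are rejected.
registry = TakedownRegistry()
registry.register_removal(b"...bytes of the removed deepfake...")
assert registry.is_banned(b"...bytes of the removed deepfake...")
```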

The Enforcement Teeth

Violations aren't just about account suspensions. The amendments reference multiple criminal statutes: the Bharatiya Nyaya Sanhita 2023 (India's new criminal code), the Protection of Children from Sexual Offences Act 2012, the Sexual Harassment of Women at Workplace Act 2013, and others.

Platforms must report serious violations to authorities within the timeframes specified in each law. For offences requiring mandatory reporting—like child abuse material—platforms face potential liability for failure to notify police.

The rules authorise police officers of Deputy Inspector General rank or higher to issue takedown orders, creating clear enforcement chains that platforms cannot ignore.

What Stays Legal

Routine photo editing, filters on Instagram, basic colour grading, and standard video editing remain perfectly legal and don't require labelling. AI tools for accessibility—like auto-generated captions or audio descriptions—are explicitly excluded. PowerPoint presentations, PDF reports, and educational materials using template-based design also get a pass.

The government tried to balance innovation concerns with safety imperatives by carving out legitimate uses while targeting deceptive deepfakes.

India joins the European Union, China, and California in establishing comprehensive AI content regulations. Unlike the EU's AI Act, which takes a risk-based approach, India's rules focus specifically on synthetic media transparency and rapid enforcement.

The three-hour takedown window represents one of the world's strictest content moderation timelines, putting India ahead of even the EU's Digital Services Act requirements. For global platforms, this means building India-specific compliance infrastructure or risking being locked out of one of their largest markets.

Starting next week, India's internet looks different—more labelled, more regulated, and theoretically safer from the worst abuses of AI-generated deception. The full text is available in the Indian Government's official Gazette notification [PDF].
