Science & Tech

Deepfakes Undermine Women's Dignity: India's New Rules for Mandatory AI Labelling on Social Platforms

October 27, 2025
Deepfakes, AI-generated content regulation, Women's digital privacy, Social media platforms, IT Ministry advisory

Why in News

The Indian government, through the Ministry of Electronics and Information Technology (MeitY), proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules on October 22, 2025, making it mandatory for social media users and platforms to label AI-generated or synthetically altered content. The move comes amid a surge in non-consensual deepfake videos targeting women, including celebrities such as Rashmika Mandanna and Aishwarya Rai; these clips blend seamlessly into feeds on Instagram and X, eroding privacy and consent while spreading misinformation at scale. The rules aim to empower users to distinguish real content from fake, requiring platforms to verify user declarations and apply visible markers, in response to a crisis in which roughly 90% of deepfake victims are women.

Key Points

  1. Deepfakes are AI-manipulated videos or images that superimpose faces onto bodies, often used to create non-consensual pornography; in India, incidents rose sharply in 2023-2025, with platforms like Instagram and X hosting thousands of such clips viewed millions of times before removal.
  2. The proposed IT Rules amendments require users to declare if uploaded content is "synthetically generated," with platforms like Meta (Instagram) and X mandated to label such material, including watermarks covering at least 10% of the visual area for easy detection.
  3. A 2024 Twicsy report revealed 84% of social media influencers faced deepfake porn, with nearly 90% being women; high-profile cases include Rashmika Mandanna's 2023 elevator video and Aishwarya Rai's recent petition for protection against morphed intimate images.
  4. Platforms' current measures are reactive: Instagram's "AI Info" label appears post-upload if flagged, while X prohibits "inauthentic media" but acts only on reports, often after viral spread; neither responded to queries on enforcement gaps.
  5. Expert NS Nappinai, founder of Cyber Saathi, emphasizes proactive watermarking at the dissemination stage and swift takedowns to protect vulnerable users, aligning with global trends such as the EU AI Act's similar disclosure requirements.
  6. The rules also target broader risks like election manipulation and fraud, building on PM Modi's 2023 warning of deepfakes as a "new crisis," with penalties under IT Act for non-compliance potentially including loss of safe harbor for platforms.
  7. Complementary efforts include judicial interventions, such as the Delhi High Court's interim relief to Aishwarya Rai, and international collaborations under WMO/ESCAP for AI ethics, but critics note enforcement challenges across India's base of 900 million internet users.
  8. Data visualizations from sources such as Reuters point to a 300% global rise in deepfake incidents since 2022, with India accounting for 15% of reported cases, underscoring the gendered nature of this digital violence.

Explained

What are deepfakes and how do they work in simple terms?

Deepfakes use artificial intelligence, specifically generative adversarial networks (GANs), where one AI creates fake content and another detects flaws, iteratively improving realism until videos or images fool the eye.

Basic theory: The term, coined in 2017, blends "deep learning" and "fake"; such systems swap faces by training models on thousands of photos. Free tools like DeepFaceLab make creation easy, and misuse has spiked with accessible apps that turn personal images into fabricated scenarios without consent.

In India: Over 95% of deepfakes are non-consensual porn per Sensity AI's 2023 report, exploiting public photos from social media; this erodes trust in digital content, as seen in viral clips blending real body language with altered faces for deceptive intimacy.
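
The adversarial loop described above can be sketched on a toy one-dimensional problem. This is purely an illustrative exercise, not a deepfake pipeline: the generator and discriminator are single-layer linear/logistic models with hand-derived gradients, and all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_toy_gan(steps=2000, batch=64, lr=0.05, real_mean=3.0):
    """Toy 1-D GAN: generator G(z) = w*z + b tries to mimic N(real_mean, 1);
    discriminator D(x) = sigmoid(u*x + v) tries to tell real from fake."""
    w, b = 1.0, 0.0          # generator parameters
    u, v = 0.0, 0.0          # discriminator parameters
    for _ in range(steps):
        # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
        real = rng.normal(real_mean, 1.0, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = w * z + b
        s_real = sigmoid(u * real + v)
        s_fake = sigmoid(u * fake + v)
        du = np.mean((s_real - 1) * real) + np.mean(s_fake * fake)
        dv = np.mean(s_real - 1) + np.mean(s_fake)
        u -= lr * du
        v -= lr * dv
        # Generator step (non-saturating loss): push D(fake) toward 1,
        # i.e. learn to produce samples the discriminator accepts as real.
        z = rng.normal(0.0, 1.0, batch)
        fake = w * z + b
        s = sigmoid(u * fake + v)
        dw = np.mean((s - 1) * u * z)
        db = np.mean((s - 1) * u)
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = train_toy_gan()
fake_mean = b   # E[G(z)] = b, since E[z] = 0
```

After training, the generator's mean output drifts from 0 toward the real mean of 3: each network's improvement forces the other to improve, which is exactly the iterative dynamic that makes full-scale deepfakes progressively harder to detect.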

Why do deepfakes disproportionately affect women and what are the privacy implications?

Gendered harm: Women, especially public figures, are targeted for objectification, with 96% of deepfake videos being pornographic per Deeptrace Labs (2019, updated 2024), amplifying misogyny and revenge porn in a patriarchal society.

Privacy erosion: These fakes violate Article 21's right to privacy under the Indian Constitution, as affirmed in Justice K.S. Puttaswamy (2017), by creating permanent digital scars; victims face harassment, career damage, and mental health issues, with no easy "right to be forgotten" in global feeds.

Broader context: In a country with 500 million women online (per TRAI 2025), deepfakes fuel gender-based violence; cases like Rashmika Mandanna's led to FIRs under IT Act Sections 66E (privacy violation) and 67A (transmission of sexually explicit content), but slow platform responses exacerbate trauma.

What are the proposed government rules and how do they aim to regulate AI content?

Core provisions: Under amended IT Rules 2021, intermediaries must obtain user declarations on synthetic content, apply metadata tags, and display labels; non-compliance risks intermediary status loss, per MeitY's October 22 advisory.
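
The declare-verify-label flow could be represented as a simple record an intermediary stores per upload. The advisory's actual schema is not public, so every field name below is a hypothetical illustration, not MeitY's format.

```python
# Hypothetical shape of a synthetic-content declaration record; all field
# names are illustrative assumptions, not the advisory's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class SyntheticContentDeclaration:
    content_id: str
    user_declared_synthetic: bool   # user's declaration at upload time
    platform_verified: bool         # outcome of the platform's own check
    label_applied: bool             # visible "synthetically generated" marker shown
    metadata_tag: str               # machine-readable provenance tag

record = SyntheticContentDeclaration(
    content_id="vid-001",
    user_declared_synthetic=True,
    platform_verified=True,
    label_applied=True,
    metadata_tag="synthetic-media/v1",
)
payload = asdict(record)   # serializable form for audit or reporting
```

The point of pairing a user declaration with an independent platform check is that either signal alone can fail: users may misdeclare, and automated detectors have false negatives.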

Enforcement mechanism: Platforms deploy automated tools for 10% watermark coverage and report quarterly to MeitY; this builds on 2023's deepfake guidelines, mandating grievance officers for swift action within 15 minutes for high-risk content.
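
The 10% visible-area requirement reduces to a simple dimension check. A minimal sketch, assuming a rectangular watermark on a rectangular frame; the function names are illustrative, not from any official tooling.

```python
def min_watermark_area(frame_w, frame_h, coverage=0.10):
    """Minimum watermark area (in pixels) to cover `coverage` of the frame."""
    return coverage * frame_w * frame_h

def watermark_compliant(frame_w, frame_h, mark_w, mark_h, coverage=0.10):
    """True if a mark_w x mark_h rectangular watermark covers at least
    `coverage` of a frame_w x frame_h frame."""
    return mark_w * mark_h >= min_watermark_area(frame_w, frame_h, coverage)

# A 1920x1080 frame has 2,073,600 px, so the watermark needs >= 207,360 px:
# a 640x360 overlay (230,400 px) qualifies; a 480x270 one (129,600 px) does not.
```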

Global alignment: The move mirrors the EU AI Act's high-risk labeling and the proposed US DEEP FAKES Accountability Act; in India, it complements the Digital Personal Data Protection Act, 2023 (DPDP) by enhancing consent verification, though AI's evolving sophistication risks outpacing the rules.

How are social media platforms currently handling deepfakes, and what gaps exist?

Platform policies: Meta requires AI disclosure at upload, applying "Made with AI" labels; X bans manipulative media under its 2024 rules but relies on user reports, removing only 60% of flagged deepfakes within 24 hours per internal audits.

Gaps in practice: Reactive moderation allows virality—e.g., Mandanna's video garnered 10 million views before takedown; algorithms amplify engagement without initial checks, and cross-platform sharing evades single-policy enforcement.

Improvements needed: Proactive scanning via AI detectors (e.g., Microsoft's Video Authenticator) and international data-sharing under G20 AI principles could reduce spread by 70%, according to a 2025 Brookings Institution study.

What legal and societal responses are emerging to combat deepfake harms?

Judicial role: Courts such as the Delhi High Court (Aishwarya Rai case, October 2025) issue injunctions and direct platforms to use hash-matching for rapid removals; in 2024 the Supreme Court read the IT Act's Section 79 safe harbor exemptions as extending to AI-generated harms.
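
Hash-matching of the kind courts have directed typically relies on perceptual hashes, which stay stable across re-encoding and compression. Below is a minimal average-hash (aHash) sketch over grayscale pixel arrays, assuming dimensions divisible by 8; production systems use more robust schemes (e.g., pHash or PDQ), so treat this as an illustration of the idea only.

```python
import numpy as np

def average_hash(gray, grid=8):
    """64-bit perceptual hash: block-average the image down to grid x grid,
    then set each bit by comparing its cell to the overall mean brightness."""
    h, w = gray.shape
    assert h % grid == 0 and w % grid == 0, "dimensions must divide evenly"
    blocks = gray.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes (0 = likely same image)."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
original = np.linspace(0, 255, 64 * 64).reshape(64, 64)  # stand-in frame
reencoded = original + rng.normal(0, 2, original.shape)  # mild compression noise
unrelated = rng.uniform(0, 255, original.shape)          # different image

d_close = hamming(average_hash(original), average_hash(reencoded))
d_far = hamming(average_hash(original), average_hash(unrelated))
```

A takedown pipeline would hash the court-flagged clip once, then compare every new upload's hash against it; a small Hamming distance flags a re-upload even if the file bytes differ.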

Societal initiatives: NGOs like Cyber Saathi train 1 million users annually on spotting fakes; education campaigns via NCW highlight red flags like unnatural blinking or audio mismatches.

Long-term theory: Balancing innovation with rights requires multi-stakeholder frameworks, including ethical AI training in curricula, to foster digital literacy amid India's projected 1 billion internet users by 2027.

MCQ Facts

Q1. What is the primary technological foundation enabling the creation of realistic deepfakes?
A) Blockchain for secure image verification
B) Generative Adversarial Networks (GANs) that pit creator and detector AIs against each other
C) Quantum computing for rapid video rendering
D) Augmented reality filters for real-time alterations
Explanation: Generative Adversarial Networks (GANs) form the core of deepfake technology by training one neural network to generate fakes while another critiques them, iteratively enhancing realism and making detection challenging without advanced tools.

Mains Question

Evaluate the effectiveness of India's proposed IT Rules amendments in regulating AI-generated deepfakes, particularly in safeguarding women's privacy and consent in the digital ecosystem, and suggest measures for stronger enforcement.

© 2025 Gaining Sun. All rights reserved.
