What Is AI-Generated Defamation and Why It Is Worse Than Traditional Defamation

AI-generated defamation encompasses multiple distinct categories of harmful synthetic content. Deepfake videos use machine learning models — typically generative adversarial networks — to place a real person's face and voice into video footage of events that never occurred, producing material that appears visually and aurally authentic to most viewers. Voice cloning technology reproduces a person's voice with sufficient accuracy to create convincing audio recordings of statements the person never made. AI text generation tools produce false articles, social media posts, or review content that impersonates a person's writing style or falsely attributes statements to them. Synthetic images blend a person's actual photographs with fabricated contexts.
The harm from AI-generated defamation is qualitatively worse than traditional defamation in several respects. First, the content is convincing to a degree that requires expert technical analysis to disprove — ordinary viewers cannot distinguish a well-crafted deepfake from genuine footage. Second, AI tools enable production at scale: a single bad actor with modest technical skills can generate hundreds of defamatory variations in hours, overwhelming any individual's capacity to monitor and respond. Third, AI-generated content often lacks the obvious signals of authorship that enable tracing in traditional defamation — there is no handwriting, no distinctive phrasing, no identifiable account history.
The speed of viral spread compounds these problems. A convincing deepfake video shared on Instagram or X can reach tens of millions of views within 24 hours of posting. By the time the subject has identified the content, engaged legal counsel, and initiated a response, the false impression has been fixed in the minds of a mass audience. This creates urgency that is categorically greater than for traditional defamation, where the content is typically text-based and discoverable before it has reached critical mass.
For public figures, executives, politicians, and anyone whose professional standing depends on public perception, the asymmetry is particularly acute. The damage to reputation from a viral deepfake may take months or years to undo — if it can be undone at all — while the content itself can be created in minutes. This is why the legal response to AI-generated defamation must be designed for speed, and why Indian law has developed emergency relief mechanisms that can respond within 24 to 48 hours when invoked correctly.
The Indian Legal Framework for AI Defamation in 2026
India does not yet have a dedicated statute specifically governing AI-generated content or deepfakes. The legal framework is therefore constructed from existing statutes applied to new fact patterns. This is not unusual in the history of technology law — the IT Act 2000 itself was designed to address electronic commerce and was subsequently applied to social media, messaging platforms, and AI systems as those technologies emerged. The absence of specific AI legislation does not create a legal gap; it requires careful analysis of how existing provisions apply to synthetic media.
The foundational defamation remedy is criminal defamation under IPC Section 499/500 (the IPC was replaced by the Bharatiya Nyaya Sanhita 2023 with effect from 1 July 2024; defamation is now BNS Section 356, with the same elements). The elements of the offence — publication, false imputation, intention or knowledge of harm to reputation — are all satisfied by AI-generated defamatory content. The fact that the content was created by an AI tool rather than written by a human does not affect the analysis: the person who directed the AI tool to generate the content and then published the result is the author for the purposes of the provision. The AI is the instrument; the human directing it bears the legal responsibility.
The Digital Personal Data Protection Act 2023 (DPDPA) adds a new dimension to AI defamation law. The DPDPA regulates the processing of personal data — including biometric data (facial images, voice recordings) — and requires consent from the data principal for most forms of processing. Creating a deepfake using a person's facial images or voice recordings constitutes processing of their biometric personal data without consent, in violation of the DPDPA. The Act provides for significant financial penalties against data fiduciaries (entities processing data) who violate consent requirements, and also creates a right of complaint to the Data Protection Board of India.
The IT Rules 2021 impose obligations on platforms — significant social media intermediaries — to expeditiously remove content that violates specified categories, including content that impersonates another person and sexually explicit content. MEITY has issued specific advisories on deepfakes, characterising them as a serious threat to individual rights and directing platforms to take proactive steps to detect and remove synthetic media. These advisories, while not having the force of statute, strengthen the legal notice framework by establishing that platforms are on constructive notice of the deepfake problem and cannot claim ignorance as a basis for delayed response.
What Is AI-Generated Defamation?
AI-generated defamation encompasses multiple forms: deepfake videos placing words in a real person's mouth, AI-generated text that mimics a person's voice or writing style to falsely attribute statements, morphed images combining a person's face with compromising imagery, and AI-fabricated screenshots of conversations that never occurred.
The threat is qualitatively different from traditional defamation because the content is convincing even to technically sophisticated viewers, can be produced at scale with minimal effort, and is difficult to immediately disprove without expert analysis. Each of these characteristics demands a legal response strategy designed for urgency, technical sophistication, and simultaneous action on multiple fronts.
The person who deploys an AI tool to create and publish defamatory synthetic media bears full legal responsibility for the resulting publication. The AI tool is the instrument — analogous to a typewriter or camera — and the human directing it is the author. This principle is well-established in Indian copyright law (where the human author of an AI-assisted work is recognised as the rights holder) and applies equally to liability for unlawful content.
Voice cloning presents a specific evidentiary challenge: audio content is often more persuasive than text and less visually verifiable than video. A cloned voice recording falsely attributing an admission, a threat, or a defamatory statement to a person can be shared via WhatsApp, embedded in podcast-style content, or played in contexts where visual authentication is impossible. Legal action for voice-clone defamation follows the same framework as for video deepfakes but requires specialised audio forensic expertise for the evidence phase.
Deepfake Videos: Which IT Act Sections Apply?
Section 66E of the IT Act 2000 addresses violation of privacy through the intentional or knowing capture, publication, or transmission of the image of a private area of any person without their consent. While the section was originally drafted with surreptitious photography in mind, Indian courts have applied it to morphed and fabricated images that incorporate a person's likeness in contexts violating their bodily privacy — including sexual deepfakes. The section carries imprisonment of up to three years and a fine up to two lakh rupees.
Section 67 IT Act (publishing obscene material in electronic form) and Section 67A IT Act (publishing sexually explicit material in electronic form) apply to deepfake content that is obscene or sexually explicit. Section 67A carries imprisonment of up to five years on first conviction. These sections are the primary criminal provisions for non-consensual intimate deepfake content — a category that has seen significant growth globally and is increasingly affecting Indian individuals.
Section 66C IT Act (identity theft) applies where the deepfake is designed to impersonate the subject — for example, a video that purports to show a corporate executive making a statement about their company's financial position, or a politician making a policy announcement. The section addresses the use of another person's “unique identification feature” electronically, and a convincing audio-visual reproduction of a person's face and voice constitutes such a feature. Section 66D (cheating by personation using computer resources) applies where the impersonation is used to deceive another person to their detriment.
IPC Section 500 (punishment for criminal defamation) applies to deepfake content that makes a false imputation concerning a person with the intention or knowledge of harming their reputation. The publication of a deepfake video placing false words or conduct in a person's mouth satisfies all elements of Section 499/500. A criminal complaint before the Magistrate, or a police FIR under Section 500 combined with the applicable IT Act sections, is the appropriate criminal proceeding for deepfake defamation that does not involve intimate imagery.
The DPDPA 2023 and AI-Generated Content: What the New Data Law Changes
The Digital Personal Data Protection Act 2023 establishes a comprehensive framework for the protection of personal data in India. Its relevance to AI-generated defamation lies in the nature of the data used to create synthetic media. Deepfake videos require facial images and, for voice synthesis, audio recordings of the target person — both categories of biometric data that constitute “personal data” under the DPDPA. Processing biometric data without the data principal's consent is unlawful under the Act.
The DPDPA grants data principals (individuals) the right to erasure — the right to have their personal data deleted by any data fiduciary that has processed it without lawful authority. Where a deepfake creator has obtained or used facial images or voice recordings without the subject's consent, the subject can invoke the right to erasure not only against the creator but also against any platform that has retained and continued to process that biometric data. This right operates alongside — not instead of — the defamation remedy, providing an additional legal lever for content removal.
The Data Protection Board of India, established under the DPDPA, provides a dedicated forum for complaints about unlawful personal data processing. Filing a complaint with the Board citing the unlawful processing of biometric data in the creation of deepfake content creates an additional compliance pressure on the creator and any platforms that have hosted the content. The Board has the power to impose significant financial penalties — up to two hundred and fifty crore rupees for certain categories of violation — which are meaningful deterrents even for well-resourced bad actors.
The DPDPA also imposes obligations on “data fiduciaries” — entities that determine the purpose and means of data processing. An AI platform that enables deepfake creation using a person's images without their consent may itself qualify as a data fiduciary in violation of the Act. RepuLex advises clients on DPDPA complaints to the Data Protection Board as part of a comprehensive response to AI-generated defamation, running in parallel with IT Act notices and court proceedings.
Applicable Legal Provisions
Indian law provides multiple applicable provisions. Section 499/500 IPC (criminal defamation) applies because the elements of “publication” and “false imputation” are satisfied regardless of whether the content was created by a human or an AI tool. Section 66E IT Act (violation of privacy by capturing and publishing images of a private area) applies to deepfake intimate imagery. Section 66C IT Act (identity theft by using the electronic signature, password, or unique identification feature of another person) applies to AI impersonation.
Sections 67 and 67A IT Act (publishing obscene and sexually explicit material electronically) apply to AI-generated sexual deepfakes. IT Act Section 43A (compensation for failure to protect data) may apply where the deepfake was created using biometric data obtained through a data breach — though Section 43A stands omitted once the DPDPA 2023 is brought fully into force, with the DPDPA's own penalty regime taking its place. IPC Section 505 (statements conducing to public mischief) applies where the deepfake is designed to incite public unrest.
The DPDPA 2023 adds consent-based and right-to-erasure grounds that apply specifically to AI-generated content created using a person's biometric data. These grounds are separate from and additional to the defamation and IT Act remedies, and they provide access to the Data Protection Board as an additional enforcement forum.
The selection of applicable provisions depends on the specific nature of the AI-generated content: intimate imagery, business defamation, political impersonation, voice cloning, and AI-written false articles each engage a somewhat different configuration of sections. Legal counsel experienced in AI defamation should conduct this analysis for each specific case before determining the appropriate combination of civil, criminal, and regulatory remedies.
Platform Obligations for Deepfake Content Under IT Rules 2021
Significant social media intermediaries — platforms with more than five million registered users in India, including Google, Meta (Instagram/Facebook/WhatsApp), YouTube, X (formerly Twitter), and others — are subject to heightened obligations under the IT Rules 2021. Rule 3(1)(b) requires these platforms to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, or sharing content that impersonates another person. Deepfakes, by definition, impersonate real individuals, and platforms are on constructive notice that such content violates their obligations.
MEITY advisories issued specifically addressing deepfakes have directed platforms to: implement technical measures to detect and flag AI-generated synthetic media; establish expedited review processes for complaints about deepfake content; and ensure that content identified as non-consensual synthetic media is removed within 24 hours of a valid complaint. These advisories have been issued under the IT Act and carry regulatory weight. A formal IT Act notice to a platform's Resident Grievance Officer citing a specific piece of deepfake content and referencing the MEITY advisories triggers the platform's obligation to act within the specified timeline.
Platforms that fail to act within the prescribed timeline lose their Section 79 safe harbour protection and become jointly liable for the defamatory content. This liability shift is a powerful commercial incentive for platforms to comply with formal legal notices. In practice, major platforms' Grievance Officers have established internal processes for handling deepfake complaints, and a correctly formatted IT Act notice from a practising advocate citing specific URLs or content identifiers produces consistent compliance outcomes.
For platforms that do not comply with IT Act notices, the escalation path is an application to the competent High Court for a mandatory order directing removal. Indian High Courts have consistently issued such orders against both Indian-operated and foreign-headquartered platforms. The contempt jurisdiction of the High Courts — which can result in significant penalties including imprisonment of responsible officers — provides a strong enforcement mechanism for court-ordered platform compliance.
Copyright as a Removal Tool
Original photographs, voice recordings, and videos of an individual are protected by copyright — typically owned by the photographer, videographer, or the individual themselves where a selfie or personal recording is concerned. AI-generated deepfakes that incorporate these protected source materials infringe copyright in the underlying work.
DMCA takedown notices citing copyright in the source photographs or videos used to create the deepfake are legally valid and are processed by platforms on 24-48 hour timelines. This is often the fastest removal route for AI-generated content on US-hosted platforms, running in parallel with IT Act notices for the Indian law dimension.
Copyright takedown operates independently of defamation law — a successful DMCA notice does not require establishing that the content is defamatory, only that it incorporates copyright-protected material without authorisation. This makes it a particularly useful tool where the defamation claim requires legal analysis that takes time, while the copyright claim can be assessed and filed rapidly.
Where the deepfake does not incorporate identifiable source material for which the subject holds copyright — for example, where it is constructed entirely from publicly available imagery — the copyright route may not be available, and the defamation and IT Act routes become the primary mechanisms. Legal counsel should assess copyright availability as part of the initial triage in any deepfake case.
Emergency Injunctions for Viral Deepfakes: The 48-Hour Court Track
An ex parte interim injunction — a court order obtained without notice to the defendant — is the most powerful immediate legal tool for viral deepfake content. "Ex parte" relief is granted where giving notice to the defendant before the order is made would defeat the purpose of the relief, or where the urgency is such that waiting for the defendant to appear would cause irreparable harm. Both conditions are readily satisfied in viral deepfake scenarios: notifying the defendant may cause them to take steps to distribute the content more widely, and the reputational harm from a viral video compounds with every additional hour of circulation.
Indian High Courts — particularly Delhi, Bombay, and Karnataka — have mechanisms for urgent applications that can be placed before a judge within 24 hours of filing in appropriate cases. The application must demonstrate: a prima facie case (the content is clearly defamatory or otherwise unlawful); irreparable harm (reputational damage that cannot be fully compensated by money); and balance of convenience (the harm from restraint to the defendant is less than the harm from non-restraint to the applicant). For deepfake defamation, all three elements are typically demonstrable with the evidence available on the first day.
The ex parte interim injunction, once granted, is served on the defendant and on the relevant platforms. Platform compliance with court orders is generally swift — major platforms have legal teams that process court orders and typically implement removal within hours of being served. The court order carries the force of the contempt jurisdiction: failure to comply can result in the platform or its officers being held in contempt of court, which is a serious sanction.
Following the ex parte order, the matter returns to court for an inter partes hearing — where the defendant has an opportunity to be heard. At this stage, the applicant must establish the case more fully. If the defendant cannot demonstrate that the deepfake is genuine or that its publication is protected by any exception, the interim injunction is confirmed as an ongoing restraint while the main case is tried. In practice, many deepfake defendants withdraw the content and settle rather than pursue contested litigation once an ex parte order has been granted.
Tracing the Creator of a Deepfake: Technical and Legal Investigation Methods
Identifying the creator of a deepfake requires a combination of technical forensic analysis and legal compulsion of platform disclosures. Technical methods include: metadata analysis of the deepfake file (video files retain creation metadata, rendering tool signatures, and sometimes platform upload markers that can identify the software and device used); reverse image search of the source photographs used in the deepfake (which may identify where those photographs were originally published and who had access to them); and forensic analysis of the deepfake for "fingerprints" of specific AI generation tools, which have identifiable artefacts in their output.
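By way of illustration only — a minimal sketch, not a forensic tool, and the file names are placeholders — the first preservation step described above (recording a downloaded file's size, hash, and capture time before any further handling) can be performed with standard tooling:

```python
import hashlib
import json
import os
from datetime import datetime, timezone

def fingerprint_file(path: str) -> dict:
    """Record a file's size, SHA-256 hash, and capture time.

    A hash taken at the moment of download lets a forensic expert later
    confirm that the copy analysed (or produced in court) is byte-identical
    to the file preserved at capture.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not exhaust memory.
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    stat = os.stat(path)
    return {
        "path": path,
        "size_bytes": stat.st_size,
        "sha256": sha256.hexdigest(),
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
    }

# Illustrative stand-in for a downloaded deepfake video file.
with open("sample_video.mp4", "wb") as f:
    f.write(b"placeholder bytes standing in for a downloaded video")

record = fingerprint_file("sample_video.mp4")
with open("evidence_manifest.json", "w") as f:
    json.dump(record, f, indent=2)
```

The hash recorded at capture is what makes the preserved copy verifiable later; actual forensic examination of rendering-tool signatures and AI-generation artefacts requires specialist software and expert evidence.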
Platform disclosure through legal process is the most reliable identification route when technical analysis is inconclusive. A court order directing a platform to disclose the account registration details (email address, phone number, IP address at account creation) of the account that uploaded the deepfake is routinely granted by Indian High Courts where a prima facie case is established. IP address disclosure, combined with a subsequent order to the internet service provider to identify the subscriber associated with that IP, can identify the physical location and identity of the uploader.
Where the deepfake was created using an AI-as-a-service platform (such as a deepfake creation website), the platform itself may retain records of who used its service to create content involving specific individuals. Legal notices and court orders to these platforms — which are often incorporated in jurisdictions outside India — are part of the investigation strategy. Some jurisdictions have mutual legal assistance arrangements with India that enable enforcement of Indian court orders against foreign-incorporated platforms.
Cybercrime police in India — specifically the units operating under the Cyber Crime Investigation Cell of State Police forces — have investigative capabilities and international cooperation relationships that are available to supplement private legal action. Filing a police complaint and engaging the Cybercrime Cell as an additional investigation track is advisable in cases where the creator appears to be operating from outside India or through technical means designed to obscure identity.
Criminal Complaints for Deepfakes: Police FIR and Cybercrime Cell
A police FIR for AI-generated defamation should be filed under the applicable sections: IPC 499/500 (defamation), IT Act 66C (identity theft), IT Act 66E (privacy violation through image), IT Act 66D (personation by computer), and where the content is sexually explicit, IT Act 67/67A. The FIR must be supported by a complete evidence file: the deepfake content itself (downloaded and preserved before removal), the URLs at which it was hosted, timestamped screenshots showing its spread, and any information about who may have created it.
The Cyber Crime Investigation Cell (CCIC) of State Police forces is the appropriate unit for deepfake investigations. Unlike general police stations, Cyber Cells have technical equipment and trained personnel for digital forensics, platform coordination, and international cooperation for cases involving foreign-hosted platforms. Many State CCICs have established relationships with platforms' law enforcement response teams and can obtain account information and content preservation orders more quickly than the civil court process.
Case outcome expectations should be set realistically. Criminal investigation of deepfake cases is technically demanding, and outcomes depend heavily on whether the creator can be identified and whether the creator is within Indian jurisdiction. Where the creator is identified and within India, the criminal process — FIR, investigation, charge sheet, trial — follows the standard criminal procedure timeline (formerly under the CrPC, now the Bharatiya Nagarik Suraksha Sanhita 2023), which is measured in years for contested matters. However, FIR registration and the investigation itself create significant pressure: most deepfake creators are not professional criminals, and the reality of facing criminal proceedings motivates early resolution through content removal and retraction.
Prosecution outcomes for deepfake defamation in India are still limited in number, reflecting the novelty of the problem rather than the inadequacy of the law. The legal provisions are clearly applicable. As Cyber Cells build expertise and case law develops, the deterrent effect of criminal prosecution for AI-generated defamation will strengthen. RepuLex coordinates police complaint preparation in AI defamation cases, ensuring that the FIR is filed under the optimal section combination and supported by evidence in a format that Cyber Cells can use effectively.
AI-Written False Articles and Defamation: The Legal Analysis
AI text generation tools — large language models such as GPT-family models, Gemini, and others — can produce false articles, social media posts, and review content that read as authentic human writing. When such content is published to make false defamatory imputations about a person or business, it is actionable defamation under IPC Section 499/500 regardless of whether a human or an AI tool produced the text. The question of authorship for liability purposes focuses on the person who prompted, edited, and published the content — not the tool used to generate it.
AI-written false articles create a specific challenge for attribution: the writing style does not carry human fingerprints that enable forensic authorship analysis. However, the platform on which the article is published — whether a website, a Medium post, a blog, or a social media account — retains account registration metadata that enables identification of the account holder through legal process. The mechanism for identifying the author of an AI-written defamatory article is therefore the same as for any other anonymously published defamatory content: court-ordered platform disclosure of account registration details.
The volume capability of AI text generation creates a distinct risk: a single motivated bad actor can produce and publish hundreds of variations of the same false allegation across multiple platforms simultaneously, creating an artificial impression of widespread consensus about the defamatory content. This tactic is particularly damaging for businesses and public figures because search engines may index multiple AI-generated articles supporting the same false claim, making the false narrative appear well-documented. The legal response must address both the individual publications (through IT Act notices and court orders) and the systematic pattern (which may support a more significant damages claim and criminal conspiracy argument).
AI-generated content that incorporates a real person's name or likeness in a defamatory context may also engage trade mark law (if the person's name is a registered trade mark), personality rights under Indian common law (developed through cases in the Delhi and Bombay High Courts), and the DPDPA where the content involves personal data. A comprehensive legal response to AI-written defamation should consider all applicable grounds, not only the core defamation claim.
MEITY Advisories and Platform Obligations
The Ministry of Electronics and Information Technology has issued explicit advisories to platforms requiring them to ensure that AI-generated content that impersonates individuals is removed promptly and that platforms implement mechanisms to label AI-generated content. These advisories are issued under the IT Act and create compliance obligations for platforms operating in India.
Platforms that knowingly host AI-generated defamatory content after receiving notice lose their Section 79 safe harbour protection and may be held liable as publishers. In the deepfake context, courts have shown willingness to pass urgent ex parte orders given the specific and severe nature of the harm.
The MEITY advisory framework, while not having full statutory force as regulations, establishes the regulatory expectation against which platform conduct is assessed. A platform that has received a MEITY advisory about deepfakes and has not implemented reasonable detection or removal mechanisms, and that then fails to act promptly on a specific deepfake complaint, faces a stronger argument that it is not entitled to Section 79 safe harbour protection.
Platforms have responded to MEITY advisories with varying degrees of compliance. Major platforms — Google, Meta, YouTube — have implemented advisory-driven changes to their deepfake detection and removal policies for India. Smaller platforms operating in India are less consistent. The regulatory enforcement of MEITY advisories is an evolving area, and RepuLex monitors regulatory developments to ensure its legal notice strategies reflect current compliance expectations.
What Indian Courts Have Said About Deepfake Defamation (2024–2026)
Indian courts have increasingly encountered deepfake defamation cases since 2024. The emerging jurisprudence reflects a clear judicial recognition of the severity of the harm and a willingness to grant urgent ex parte relief in appropriate cases. Delhi High Court has in multiple matters granted interim injunctions restraining the continued circulation of AI-generated defamatory content within 48 hours of urgent applications being filed, treating the matter on par with other urgent defamation applications but with an acknowledgment that the artificial nature of the content adds to the severity of harm.
Courts have applied the existing framework of defamation law, IT Act provisions, and privacy rights to deepfake facts without requiring new legislation. The principle that has emerged most clearly is that the technological novelty of AI generation does not change the legal character of the content — defamatory AI-generated content is defamation, privacy-violating synthetic imagery is a privacy violation, and identity-theft-by-deepfake is identity theft. The statutory provisions are applied to the new fact patterns by extending established interpretive principles.
On platform liability, courts have consistently held that platforms cannot shelter behind Section 79 safe harbour after receiving specific notice of deepfake content and failing to act. Several cases have resulted in mandatory orders directing platforms to remove deepfake content, with the courts explicitly noting that the irreversible nature of reputational harm from AI-generated defamation justifies treating the matter as urgent and granting relief without the standard notice periods. This judicial posture is consistent across the major High Courts that have addressed the issue.
The position as of 2026 is that the legal framework for deepfake defamation is functional but still developing. Damages jurisprudence — how courts will quantify harm from AI-generated defamation — is at an early stage. Criminal prosecution outcomes are limited. The right to be forgotten arguments in deepfake contexts have not yet been fully tested in the Supreme Court. RepuLex advises clients to engage now with legal remedies that are clearly available, while monitoring emerging case law for additional grounds as it develops.
What Victims Should Do Immediately
Document the content immediately with timestamped screenshots and video capture of the defamatory content in its original context. Do not wait for the content to gain wider distribution — every hour matters in a viral scenario. File a police complaint immediately under the applicable IPC and IT Act sections; the FIR number is a powerful tool for compelling platform compliance.
Engage specialist legal counsel to issue simultaneous IT Act notices (to the platform's Grievance Officer), DMCA notices (where US-hosted), and if necessary, file an urgent ex parte application in the competent High Court. The multi-track approach achieves the fastest possible removal while creating a legal record for subsequent civil and criminal proceedings.
Preserve technical evidence before it disappears. Download the deepfake file itself (not just a screenshot) where accessible. Note the exact URL, the account name and handle of the poster, the view count and share statistics (which affect the damages calculation), and any comments or interactions that indicate the spread. If the content has been shared to private channels such as WhatsApp or email, document those instances through witness evidence.
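For completeness — a sketch only, with placeholder URLs and illustrative field names, not legal advice — the items above (exact URL, poster handle, view and share counts, contextual notes) can be logged to a simple timestamped file as each instance is captured:

```python
import csv
from datetime import datetime, timezone

EVIDENCE_LOG = "evidence_log.csv"
FIELDS = ["captured_at_utc", "url", "account_handle", "views", "shares", "notes"]

def log_capture(url, account_handle, views, shares, notes=""):
    """Append one timestamped evidence entry to a local CSV log."""
    entry = {
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "url": url,
        "account_handle": account_handle,
        "views": views,
        "shares": shares,
        "notes": notes,
    }
    try:
        # "x" mode fails if the file exists, so the header is written once.
        log_file = open(EVIDENCE_LOG, "x", newline="")
        writer = csv.DictWriter(log_file, fieldnames=FIELDS)
        writer.writeheader()
    except FileExistsError:
        log_file = open(EVIDENCE_LOG, "a", newline="")
        writer = csv.DictWriter(log_file, fieldnames=FIELDS)
    writer.writerow(entry)
    log_file.close()
    return entry

# Illustrative entries (placeholder URLs and handles).
log_capture("https://example.com/post/1", "@poster_handle", 120000, 4300,
            "original upload; screenshot saved separately")
log_capture("https://example.com/post/2", "@reshare_account", 8000, 150,
            "reshare with added caption")
```

A running log of this kind, assembled as the content spreads, feeds directly into the damages calculation and gives counsel a dated record of each URL for the IT Act notices and any court application.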
Consider a public factual correction — not a rebuttal of the false content (which amplifies it), but a simple factual statement that the content is AI-generated and false, published through authoritative channels (your own website, a verified social media account, a statement to relevant media). This does not substitute for legal action but provides a parallel record of your response and gives platforms additional context when processing your removal request.
RepuLex Editorial
Legal Researcher · IT Law & Defamation Practice
RepuLex's editorial team is composed of practising advocates and senior legal researchers specialising in IT Act 2000, defamation law, and digital content enforcement across Indian High Courts. All articles are reviewed for legal accuracy before publication. Nothing in this article constitutes legal advice — consult a qualified advocate for your specific situation.