# Teens Sue xAI Over Grok's Sexually Explicit Deepfake Images

Three Tennessee teenagers have taken Elon Musk's artificial intelligence company to federal court, alleging that xAI's chatbot Grok was used to create sexually explicit deepfake images and videos of them without their knowledge. The lawsuit, filed on March 16, 2026, in the U.S. District Court for the Northern District of California (Case No. 5:26-cv-02246), is the latest in a wave of legal actions triggered by Grok's controversial image generation features.
The case highlights a growing collision between AI capabilities and existing laws designed to protect against sexual exploitation, particularly of minors.
## What the Lawsuit Alleges
The three plaintiffs, identified as Jane Does 1, 2, and 3, are all from Tennessee. Two were under 18 at the time the images were created. Jane Doe 1 is now a legal adult but was a minor when the source images were taken.

According to the 44-page complaint, Jane Doe 1 discovered the deepfake material after receiving an anonymous Instagram message directing her to a Discord server. There, she found her high school yearbook photo and a homecoming dance picture had been altered to show her in sexually explicit situations and full nudity.
The Discord server contained similar altered images of at least 18 other girls who were minors, many from the same school. The material was also shared on Telegram and on the file-sharing platform Mega, with the victims' first names and the name of their school attached.
The complaint names xAI Corp. and xAI LLC as defendants. Lawyers from Lieff Cabraser Heimann & Bernstein LLP and Baehr-Jones Law PC, led by former federal prosecutor Vanessa Baehr-Jones, represent the plaintiffs.
"xAI chose to profit off the sexual predation of real people, including children, despite knowing full well the consequences of creating such a dangerous product," Baehr-Jones stated.
## The Legal Claims
The lawsuit brings approximately 13 causes of action, including claims under:
- Masha's Law (18 U.S.C. § 2255): A federal civil remedy for child sexual exploitation victims, providing minimum statutory damages of $150,000 per violation
- Trafficking Victims Protection Act (TVPA): Federal human trafficking statutes
- California state law claims: Including negligence, intentional infliction of emotional distress, public nuisance, and California's Unfair Competition Law
The lawsuit seeks class certification covering all U.S. residents who had real images of themselves as minors altered by Grok into sexualized content. The estimated class size reaches into the thousands.
The perpetrator who created the Discord server was arrested in late December 2025 and is subject to a separate police investigation. That investigation discovered hundreds of AI-generated child sexual abuse images on his devices.
## How Grok's Image Features Created a Crisis
Grok launched in 2023 as xAI's AI chatbot, hosted on Musk's social media platform X. The controversy stems from image generation features released in 2025.
In August 2025, xAI released Grok Imagine for paid subscribers, which included a "Spicy Mode" toggle described as enabling "edgier, more visually daring narratives." By late December 2025, Musk announced that Grok could edit any image uploaded to X with a single click.
The feature quickly went viral for the wrong reasons. Users began requesting edits to women's photos, asking Grok to remove clothing or place subjects in sexual situations. Controlled testing by Reuters found Grok bypassed its own safety filters in 45 out of 55 attempts to generate sexualized images of real people.
A 24-hour analysis conducted in early January 2026 found Grok generating approximately 6,700 sexually suggestive or "nudifying" deepfakes per hour.
## The CCDH Report: 3 Million Images in 11 Days
The Center for Countering Digital Hate (CCDH) published its findings on January 15, 2026, after studying Grok's output over 11 days from December 29, 2025, through January 8, 2026.
Key findings from the report:
| Metric | Finding |
|---|---|
| Total sexualized images generated | Estimated 3 million |
| Images depicting children | Approximately 23,000 photorealistic |
| Animated/cartoon images of children | Approximately 9,900 |
| Average generation rate | 190 sexualized images per minute |
| Rate for child images | One every 41 seconds |
| Child images still accessible on X (Jan 15) | 29% of those identified in the sample |
The CCDH analyzed a random sample of 20,000 images from over 4.6 million total Grok-generated posts during that period. Named public figures depicted included Taylor Swift, Selena Gomez, Billie Eilish, and Kamala Harris.
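The report's headline rates can be sanity-checked against its own totals with simple arithmetic. A minimal sketch, using only the figures quoted above (an 11-day window, roughly 3 million sexualized images, roughly 23,000 photorealistic child images):

```python
# Sanity-check of the CCDH report's headline rates, derived from the
# totals quoted in the article. All figures are the report's estimates.
window_days = 11
window_minutes = window_days * 24 * 60      # 15,840 minutes in the window
window_seconds = window_minutes * 60        # 950,400 seconds in the window

total_sexualized = 3_000_000
child_photorealistic = 23_000

per_minute = total_sexualized / window_minutes
seconds_per_child_image = window_seconds / child_photorealistic

print(round(per_minute))               # ~189, consistent with "190 per minute"
print(round(seconds_per_child_image))  # ~41, consistent with "one every 41 seconds"
```

Both derived rates line up with the figures in the table, which suggests the per-minute and per-child-image rates were computed directly from the 11-day totals.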
## Federal Laws Addressing AI Deepfakes
Several federal laws now apply to AI-generated non-consensual intimate imagery.
### TAKE IT DOWN Act
The TAKE IT DOWN Act (S.146) was signed into law on May 19, 2025, with bipartisan sponsorship from Senators Ted Cruz and Amy Klobuchar. The law:
- Criminalizes the non-consensual publication of intimate images, including AI-generated "digital forgeries"
- Imposes penalties of up to two years' imprisonment for depictions of adults and up to three years for depictions of minors
- Requires covered platforms to establish notice-and-removal processes, with a 48-hour removal deadline after notification
- Grants enforcement authority to the Federal Trade Commission
- Sets a compliance deadline for platforms of May 19, 2026
### DEFIANCE Act
The DEFIANCE Act (S.1837) passed the U.S. Senate unanimously on January 13, 2026, and is pending in the House as of March 2026. It would create a federal civil right of action for deepfake victims with statutory damages up to $150,000 per violation ($250,000 if linked to sexual assault, stalking, or harassment). The 10-year statute of limitations runs from the later of discovery or the victim's 18th birthday.
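The "later of discovery or the victim's 18th birthday" rule can be made concrete with a short sketch. This is an illustration only, not legal advice; the helper name and the naive year arithmetic are assumptions, not anything from the bill text:

```python
from datetime import date

def defiance_limitations_deadline(discovery: date, eighteenth_birthday: date) -> date:
    """Hypothetical helper illustrating the rule described in the article:
    the 10-year clock runs from the LATER of discovery or the victim's
    18th birthday."""
    start = max(discovery, eighteenth_birthday)
    # Naive year arithmetic; a Feb 29 start date would need special handling.
    return start.replace(year=start.year + 10)

# A victim who discovers the images while still a minor gets the clock
# started at her 18th birthday, not at the earlier discovery date.
print(defiance_limitations_deadline(date(2026, 1, 5), date(2027, 6, 1)))  # 2037-06-01
```

The practical effect is that a minor victim never loses limitation time before reaching adulthood, since the earlier discovery date is simply superseded by the 18th birthday.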
### Section 230 Implications
AI-generated content complicates the traditional Section 230 shield that protects platforms from liability for user-generated content. The Deepfake Liability Act (H.R.6334), pending in the House, would amend the definition of "information content provider" to include content created through "solicitation, encouragement, or the use of a generative model," directly targeting Section 230 immunity for AI platforms.
## California State Laws
California has enacted multiple laws targeting AI deepfakes:
- AB 1831 (effective January 1, 2025): Criminalizes creation, distribution, and possession of AI-generated child sexual abuse material
- AB 602: Creates a civil cause of action for victims of non-consensual deepfake pornography
- SB 926: Criminalizes creation or distribution of non-consensual AI-generated intimate images
- SB 981: Requires social media platforms to provide reporting mechanisms for non-consensual deepfakes
- AB 621 (effective January 1, 2026): Allows district attorneys to bring cases against companies that "recklessly aid and abet" deepfake distribution, with penalties up to $250,000 per violation
On January 16, 2026, California Attorney General Rob Bonta issued a cease and desist letter to xAI, demanding it halt the creation and distribution of non-consensual intimate imagery and child sexual abuse material.
As of mid-2025, 47 states have enacted laws targeting AI-generated synthetic media, and 45 states have laws specifically criminalizing AI-generated child sexual abuse material.
## Global Regulatory Response
The Grok deepfake crisis triggered investigations across multiple countries and regulatory bodies.
### United Kingdom
Ofcom launched a formal investigation into X on January 12, 2026, under the Online Safety Act 2023. Potential penalties include fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater.
The UK Information Commissioner's Office (ICO) opened a separate investigation on February 3, 2026, examining whether xAI processed personal data lawfully under UK GDPR. New offenses under the Data (Use and Access) Act 2025, criminalizing the creation of non-consensual intimate AI images, took effect on February 6, 2026.
### European Union
The European Commission opened a formal Digital Services Act investigation on January 26, 2026, examining whether X properly assessed and mitigated risks from deploying Grok. The Commission had previously ordered X to retain all internal documents related to Grok until end of 2026.
### France
French prosecutors launched their own investigation in January 2026. In March 2026, they announced suspicions that Musk encouraged the deepfake controversy to artificially inflate X's value ahead of a planned June 2026 IPO. Prosecutors contacted the U.S. Department of Justice and SEC with their concerns.
### Other Countries
Indonesia became the first country to temporarily block Grok access on January 10, 2026. Malaysia followed with temporary restrictions on January 11. India's Ministry of Electronics and IT issued notices to X under the Information Technology Act. Canada's Privacy Commissioner expanded an existing investigation, and Japan summoned X Corp representatives.
## Other Lawsuits Against xAI
The Tennessee teen case is not the first legal challenge to Grok's image features.
Jane Doe v. xAI (January 23, 2026): A South Carolina woman filed a class action in the Northern District of California after Grok posted an AI-generated bikini image of her without consent. X initially declined to remove the image for three days despite multiple reports.
Ashley St. Clair v. xAI (January 15, 2026): A political influencer and mother of one of Musk's children sued after users prompted Grok to create sexually explicit images using a photo of her at age 14. After she filed suit, xAI countersued in Texas, claiming she violated its terms of service, and her X account was demonetized.
## Grok's Current Status
After weeks of regulatory pressure, xAI implemented restrictions in mid-January 2026:
- Image generation and editing are limited to paid subscribers (SuperGrok at $30/month or X Premium+ at $16/month)
- Grok can no longer create or edit images of real individuals into revealing clothing
- Age verification (18+) is required
- "Spicy Mode" has been redefined to affect only language style and humor, not sexual content
- Grok 4 includes a "Moral Reasoner" layer designed to detect sexual intent
Legal experts have noted potential complexities in the Tennessee teen case. Stanford Internet Observatory researcher Riana Pfefferkorn observed that the perpetrator used a third-party application licensing Grok's technology rather than Grok directly, which may complicate the claim that xAI is directly responsible.
The case is ongoing as of March 2026.
This article provides general legal information about AI deepfake laws and pending litigation. Laws are evolving rapidly in this area. Consult an attorney licensed in the relevant jurisdiction for advice specific to your situation.
## Sources and References
- TAKE IT DOWN Act (S.146) (congress.gov)
- DEFIANCE Act (S.1837) (congress.gov)
- Masha's Law (18 U.S.C. § 2255) (law.cornell.edu)
- Deepfake Liability Act (H.R.6334) (congress.gov)
- California AG Cease and Desist to xAI (oag.ca.gov)
- California AB 1831 - AI-Enabled Child Sexual Abuse (asmdc.org)
- CCDH Report: Grok Floods X with Sexualized Images (counterhate.com)
- Ofcom Investigation into X over Grok Imagery (ofcom.org.uk)
- ICO Investigation into Grok (ico.org.uk)
- European Commission DSA Investigation (ec.europa.eu)
- State Intimate Deepfake Legislation Tracker (citizen.org)
- State Laws Criminalizing AI-Generated CSAM (enoughabuse.org)