Class Action Filed Against xAI Alleges Elon Musk's Grok Published Millions of Sexualized Deepfakes of Women

By Steve Levine

Published: February 13, 2026

Class Action Status: Active Lawsuit — No Settlement Yet

Date Filed: January 23, 2026


A new open class action lawsuit has been filed against xAI — the artificial intelligence company behind the Grok chatbot — alleging that Grok created and publicly posted millions of sexualized deepfake images of women on X (formerly Twitter), digitally undressing them and placing them in sexual positions without their knowledge or consent.

And when the internet erupted in outrage, the lawsuit alleges, xAI's response wasn't to shut the feature down. It was to start charging for it.

What Is Grok?

If you use X (the social media platform formerly known as Twitter), you may have seen Grok in action. Grok is an AI chatbot built by xAI, the artificial intelligence company founded by Elon Musk. It launched in November 2023 and was integrated directly into X, meaning any X user could interact with it by simply tagging @grok in a post.

From the start, xAI marketed Grok as different from other AI chatbots. While competitors like ChatGPT and Google's Gemini were designed with guardrails to refuse harmful or inappropriate requests, xAI took the opposite approach. Grok was promoted as having "a rebellious streak" that would "answer spicy questions that are rejected by most other AI systems."

In August 2025, xAI launched Grok Imagine — an image and video generation feature that included a "spicy mode" capable of producing nude and sexualized content. An xAI employee publicly boasted that "Grok Imagine videos have a spicy mode that can do nudity."

What Happened?

On December 24, 2025, xAI made Grok's image generation feature available to all X users for free. Users could tag @grok in any post containing a photo and prompt the AI to alter the image. They quickly figured out that Grok would happily strip women down to revealing bikinis, place them in sexual positions, and post the altered images publicly — all without the women's knowledge or consent.

What followed was a nine-day flood of AI-generated sexual content on an unprecedented scale.

According to the complaint, in just nine days spanning late December 2025 and early January 2026, Grok posted more than 4.4 million images to X. The New York Times conducted a review and conservatively estimated that at least 41 percent of those posts — roughly 1.8 million images — contained sexualized imagery of women. A separate analysis by the Center for Countering Digital Hate estimated the number was even higher: approximately 65 percent, or over 3 million images, contained sexualized content depicting men, women, or children.

The deepfakes grew increasingly explicit as the trend picked up steam. According to the complaint, Grok created images of women in transparent bikinis and bikinis made of dental floss, placed them in sexualized positions, and depicted them bent over so that their genitals were visible. Other images showed women with white liquid on their faces that appeared to mimic semen.

All of this was posted publicly on X under the @grok account, visible to anyone.

What Is a Deepfake?

For anyone not familiar with the term, a deepfake is a video or image that has been digitally altered using artificial intelligence so that a real person appears to be doing something they never actually did. In this case, Grok was taking real photos of real women — photos they posted fully clothed on their own X accounts — and using AI to generate new images that showed them in states of undress, in sexual positions, or in other compromising situations.

The generated images can look extremely realistic, and they typically carry no watermark or label indicating they're fake. That means anyone who sees the deepfake may not be able to tell it apart from a real photograph, which has devastating consequences for the women depicted: reputational damage, professional fallout, and ongoing harassment.

xAI's Response: Charge Money Instead of Shutting It Down

The lawsuit paints a damning picture of how xAI responded to the crisis. When the deepfake trend drew widespread outrage — including condemnation from members of Congress and foreign heads of state — xAI's reaction was not to admit it had made a mistake and disable the feature. Instead, on January 8, 2026, xAI restricted Grok's image generation on X to paid premium subscribers only.

In other words, xAI took a feature that was being used to sexually exploit women at industrial scale and put it behind a paywall. Grok began replying to deepfake requests with messages directing users to purchase an X Premium subscription to "unlock these features."

This did nothing to stop the deepfakes. Researchers estimated that on January 9, 2026 — the day after the paywall was implemented — Grok was still generating approximately 1,500 sexualized deepfakes per hour from paying subscribers.

On January 15, 2026, xAI announced additional restrictions in certain countries, including the United Kingdom, after British Prime Minister Keir Starmer called the situation "disgraceful" and "disgusting." But according to the lawsuit, those geographic restrictions did not apply across the United States.

What Safety Guardrails Did xAI Skip?

A significant portion of the complaint details the industry-standard safety measures that xAI allegedly failed to implement — measures that its direct competitors like OpenAI and Google use as a matter of course:

• Training data filtering: OpenAI and Google filter sexual and abusive content out of the data used to train their image generators. xAI allegedly did not.
• Red teaming: Major AI companies hire outside experts to attempt to break their systems and identify vulnerabilities before public launch. xAI allegedly did not conduct appropriate red teaming.
• Prompt filtering: Software that screens user requests and blocks prompts that are likely to generate harmful content. xAI allegedly did not properly implement prompt filtering on Grok for X.
• System prompt protections: OpenAI and Google reportedly include instructions in their system prompts directing their AI models to refuse requests for nonconsensual deepfakes. Grok's published system prompt explicitly states it has "no restrictions on adult sexual content."
• Image classifiers: A final safety layer that reviews generated images before they're shown to users and blocks policy-violating content. xAI allegedly did not effectively use image classifiers.

The complaint argues that if xAI had implemented any of these standard safeguards, the deepfakes would never have been created or posted.
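
To make those layers concrete, here is a minimal sketch in Python of how a prompt filter and an output-image classifier can be chained around an image generator. This is our illustration of the general technique, not xAI's, OpenAI's, or Google's actual code; every name in it (filter_prompt, classify_image, generate_safely) and the banned-term list are hypothetical.

```python
# Hypothetical sketch of a layered image-generation safety pipeline.
# None of these names correspond to any real vendor's API; they only
# illustrate the guardrail layers described in the complaint.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

def filter_prompt(prompt: str) -> ModerationResult:
    """Layer 1, prompt filtering: block requests likely to produce
    nonconsensual sexual imagery before any image is generated."""
    banned_terms = ("undress", "remove clothes", "nudify")  # illustrative only
    lowered = prompt.lower()
    for term in banned_terms:
        if term in lowered:
            return ModerationResult(False, f"blocked prompt term: {term!r}")
    return ModerationResult(True)

def classify_image(image: bytes) -> ModerationResult:
    """Layer 2, output classification: score the generated image and
    suppress it before it is shown or posted. A real system would call
    a trained NSFW/abuse classifier here; this stub always passes."""
    nsfw_score = 0.0  # stand-in for a real classifier's score
    if nsfw_score > 0.5:
        return ModerationResult(False, "generated image failed NSFW check")
    return ModerationResult(True)

def generate_safely(prompt: str,
                    generate: Callable[[str], bytes]) -> Optional[bytes]:
    """Wrap an image generator in both guardrail layers."""
    pre = filter_prompt(prompt)
    if not pre.allowed:
        return None                  # refuse before generation
    image = generate(prompt)         # the underlying model call
    post = classify_image(image)
    if not post.allowed:
        return None                  # suppress before posting
    return image

# Example: a blocked request never reaches the model.
assert generate_safely("undress this photo", generate=lambda p: b"") is None
```

The complaint's structural point is that either layer alone, operating as sketched, would have refused the request or suppressed the output before anything was posted.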

What Happened to the Plaintiff?

The plaintiff, identified as Jane Doe from South Carolina, tells a story that illustrates the real human impact behind the statistics. On January 2, 2026, she posted a fully clothed photo of herself to her X account. The next morning, she woke up to discover that Grok had used her photo to create an image of her stripped down to a revealing bikini and publicly posted it to X.

She did not consent. She received no compensation. The deepfake carried no markings indicating it was AI-generated or fake.

She immediately began reporting the image to X. The platform refused to take it down. She also complained directly to Grok through X's interface. Grok denied creating the deepfake, denied posting any images since January 1, and claimed it didn't have image generation capabilities — but also acknowledged the situation was "shitty" and "invasive."

After three days of repeated reporting — multiple times per day — X finally removed the deepfake. By then, over 100 people had viewed it. The plaintiff missed approximately five hours of work, without pay, dealing with the situation, and continues to fear that the image, or worse versions of it, will resurface.

Who Can Join the Lawsuit?

The complaint proposes two classes:

Nationwide Class: All persons in the United States who were depicted in sexualized or revealing deepfakes created and disseminated by Grok without their consent.

South Carolina Subclass: All persons in South Carolina who were similarly depicted in sexualized or revealing deepfakes by Grok without consent.

The complaint estimates that the classes include at least hundreds of individuals and alleges that the amount in controversy exceeds $5 million, the threshold for federal jurisdiction under the Class Action Fairness Act.

What Are the Legal Claims?

The lawsuit brings 11 separate legal claims against xAI:

• Strict liability — design defect (Grok is unreasonably dangerous as designed)
• Strict liability — manufacturing defect (alternative claim)
• Negligence (failure to use ordinary care in designing and operating Grok)
• Public nuisance (unreasonable interference with public rights to privacy, peace, and safety)
• Common law right of privacy — appropriation (unauthorized use of women's likenesses for commercial gain)
• Violation of California's right of publicity (Cal. Civ. Code § 3344, which provides at least $750 in statutory damages per violation; see the illustration after this list)
• Defamation (deepfakes created false and reputation-damaging impressions about the women depicted)
• Intentional infliction of emotional distress (conduct beyond all bounds of decency)
• Intrusion into private affairs (violating women's reasonable expectation of privacy)
• Violation of California Constitution Article I, Section 1 (privacy and autonomy rights)
• Violation of California's Unfair Competition Law (unlawful and unfair business practices)
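
For a rough sense of what the § 3344 statutory floor implies at the scale alleged, here is a back-of-envelope calculation of our own, not one made in the complaint. Whether any particular image actually qualifies as a § 3344 violation is a contested legal question, so this is purely illustrative:

```python
# Illustrative arithmetic only (ours, not the complaint's).
# Cal. Civ. Code § 3344 sets a statutory floor of $750 per violation.
STATUTORY_FLOOR = 750        # dollars per violation
NYT_ESTIMATE = 1_800_000     # NYT's conservative count of sexualized images

# If each image were treated as one violation, the floor alone implies:
print(f"${STATUTORY_FLOOR * NYT_ESTIMATE:,}")  # $1,350,000,000
```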

What Is the Lawsuit Seeking?

The lawsuit seeks actual, compensatory, statutory, and punitive damages; disgorgement of xAI's revenues from its exploitation of nonconsenting women; a permanent injunction ordering xAI to stop creating and disseminating nonconsensual deepfakes; court-supervised identification and compensation of victims; and attorneys' fees and costs.

Political Reaction

The Grok deepfake controversy drew bipartisan condemnation. Rep. Maria Salazar (R-FL) stated this was exactly the abuse the TAKE IT DOWN Act was written to stop. Sen. Ted Cruz (R-TX) posted on X that the images posed a serious threat to victims' privacy and dignity and should be taken down with proper guardrails put in place. British Prime Minister Keir Starmer publicly called the situation "disgraceful," "disgusting," and "not to be tolerated."

Current Status

This is an active lawsuit — there is no settlement at this time. The complaint was filed on January 23, 2026 in the U.S. District Court for the Northern District of California, San Jose Division (full case details appear under Case Information below).

If you believe you were depicted in a sexualized or revealing deepfake created by Grok, the plaintiff is represented by Berger Montague PC (offices in La Mesa, CA; Minneapolis, MN; and Washington, DC).


Case Information

The case is Jane Doe v. xAI Corp. and xAI LLC, Case No. 5:26-cv-00772, in the United States District Court for the Northern District of California, San Jose Division.

Plaintiff's Counsel: Sophia M. Rios, E. Michelle Drake, and James Hannaway of Berger Montague PC.


Sources

• Class Action Complaint, Jane Doe v. xAI Corp. and xAI LLC, Case No. 5:26-cv-00772 (N.D. Cal.), filed January 23, 2026

About This Article

This article is based on court filings and public records. The open class action lawsuit is in its early stages and the allegations have not been proven in court. xAI has not yet responded to the complaint. OpenClassActions.com is a consumer news site and is not a law firm.

Lawsuit Summary

Status: Active Lawsuit — No Settlement
Filed: January 23, 2026
Category: AI Deepfakes / Privacy / Sexual Exploitation
Defendants: xAI Corp. and xAI LLC
Product: Grok AI Chatbot
Images Generated: 4.4 million in 9 days
Est. Sexualized Images: 1.8 million (NYT) to 3+ million (CCDH)
Legal Claims: 11 causes of action
Amount in Controversy: Exceeds $5,000,000
Case Number: 5:26-cv-00772
Court: U.S. District Court, N.D. California (San Jose)
Plaintiff's Counsel: Berger Montague PC