Grok AI Deepfake Class Action Lawsuit: xAI Faces Nationwide Legal Action Over Non-Consensual Sexual Images

Steve Levine | Published: February 22, 2026

Status: Active — No Settlement or Claim Form Available

Case Filed: January 23, 2026

Court: U.S. District Court, Northern District of California


A class action lawsuit has been filed against xAI Corp. and xAI LLC over allegations that the company's AI chatbot Grok was used to generate and publicly disseminate millions of non-consensual sexualized deepfake images of women and children on X (formerly Twitter). The case, Jane Doe v. xAI Corp., et al., Case No. 5:26-cv-00772, was filed January 23, 2026, in the U.S. District Court for the Northern District of California.

The lawsuit alleges xAI knew Grok's image generation capabilities were being exploited to create sexually explicit deepfakes but failed to implement industry-standard safeguards — and instead monetized the feature by restricting it to paying subscribers.

Multiple law firms are currently investigating claims on behalf of victims, and regulatory investigations have been launched in over a dozen countries. Here is everything consumers need to know about this case.

What Is the Grok Deepfake Lawsuit About?

The lead plaintiff, identified as Jane Doe, is a South Carolina woman who posted a fully clothed photograph of herself on X in early January 2026. The following day, she discovered that another X user had prompted Grok to transform her photo into a revealing bikini image, which was then publicly posted on the platform.

According to the complaint, when the plaintiff contacted X's support team to request the image be taken down, X refused. When she complained directly to the Grok chatbot, it denied creating any deepfake images — but acknowledged the situation was "invasive."

The lawsuit asserts eleven causes of action against xAI, including product liability (design and manufacturing defects), negligence, public nuisance, defamation, intentional infliction of emotional distress, appropriation of likeness, violation of California's Right of Publicity Statute, intrusion into private affairs, and violation of California's Unfair Competition Law.

The complaint argues that Grok is "defective because it creates sexualized or revealing deepfakes and publicly disseminates those deepfakes," calling the product "unreasonably dangerous."

How Many Deepfake Images Did Grok Generate?

The scale of Grok's deepfake output has been staggering. According to data compiled by the Center for Countering Digital Hate (CCDH), Grok produced more than 3 million sexualized images in just an 11-day window between December 29, 2025 and January 8, 2026. Of those, approximately 23,000 depicted minors.

Separate analysis cited in the lawsuit found that between December 2025 and January 2026, Grok generated and posted more than 4.4 million images to X, with up to 41 percent containing sexual imagery of women. At peak usage, the tool was generating an estimated 6,700 sexualized deepfake images per hour.

The lawsuit also notes that, unlike competitors such as Google and OpenAI, xAI did not use standard data filtration methods to remove sexual and abusive content from Grok's training data. The complaint alleges that if xAI had implemented these basic safeguards, Grok would not have been able to generate the deepfakes in the first place.

Who Is Eligible for the Grok Deepfake Class Action?

The proposed class covers all individuals in the United States who, within the applicable statute of limitations period, were depicted in sexualized or revealing deepfakes created and disseminated by Grok without their consent. A South Carolina subclass has also been proposed.

According to law firms investigating related claims, you may be eligible if you:

• Had a nude or sexualized deepfake image generated by Grok without your consent
• Were depicted in an image that "undressed" or sexualized you based on a real photo posted on X
• Experienced sexually explicit content involving yourself or your child generated by Grok
• Reported the content to X but saw delayed action, incomplete removal, or a refusal to take it down
• Suffered emotional distress, reputational harm, mental health effects, or other damages as a result

At least 100 individuals are currently involved in the lawsuit. Given the millions of images generated, the potential class size could be substantially larger.

Is There a Claim Form for the Grok Deepfake Lawsuit?

No claim form is available at this time. The case was just filed in January 2026, and class action lawsuits typically go through years of litigation before reaching the settlement stage where claim forms become available.

For now, victims who believe they were affected can contact the plaintiff's attorneys at Berger Montague P.C. or other law firms investigating Grok deepfake claims to learn about their legal options. Several firms, including Wallace Miller and others, are actively accepting case evaluations.

If and when a settlement is reached, a claim form and instructions for filing will be made publicly available. Eligible class members will typically receive notice via email, mail, or through a dedicated settlement website.

How Many People Are Affected by Grok Deepfakes?

Based on available data, the number of people potentially affected is enormous. With Grok generating an estimated 3 million or more sexualized images in just 11 days — and over 4.4 million total images in the December 2025 to January 2026 period — the number of individual victims could easily reach into the hundreds of thousands or more.

The lawsuit states that the ability to generate sexual deepfakes with the click of a button "harmed thousands of women who were digitally stripped and forced into sexual situations they never consented to."

However, identifying every affected individual poses a significant challenge. Many victims may not even know deepfake images of them were created. The images were generated from publicly posted photos on X, meaning anyone with a photo on the platform could have been targeted.

When Will This Grok Deepfake Class Action Be Certified?

Class certification has not yet occurred and is not expected for many months. First hearings in the case are anticipated in the second quarter of 2026, with the certification process likely extending into late 2026 or 2027.

The court will need to determine whether the proposed class meets the requirements for certification, including whether the claims of the class members share common questions of law and fact, and whether a class action is the superior method for resolving the dispute.

Given the scale of the harm alleged and the number of potential victims, legal analysts believe the case has a strong foundation for certification. The common element — Grok generating non-consensual sexualized images without any effective safeguards — applies uniformly to all proposed class members.

What Are the Odds This Grok Class Action Is Settled?

Several factors suggest this case has strong settlement potential.

First, the political and regulatory pressure on xAI is immense. At the time of filing, 35 state attorneys general had sent a joint letter of concern to xAI, California's attorney general had issued a cease-and-desist order, and regulatory investigations were underway in the EU, UK, France, Ireland, Spain, India, Japan, Indonesia, Canada, Brazil, and Australia, among others.

Second, the factual record is unusually strong. Independent organizations have documented millions of sexualized images, including images of minors, with detailed data about generation rates and content types. CBS News independently verified that Grok's "undressing" capabilities continued to function weeks after xAI claimed to have implemented restrictions.

Third, the DEFIANCE Act — unanimously passed by the U.S. Senate on January 13, 2026 — would create a federal civil cause of action allowing victims to sue for $150,000 to $250,000 per violation. While the bill still awaits House approval, its passage signals overwhelming bipartisan support for holding deepfake creators accountable.

Fourth, the Take It Down Act, signed into law on May 19, 2025, requires platforms to implement notice-and-removal processes for non-consensual intimate images by May 19, 2026, with FTC enforcement. X's well-documented failures to remove reported deepfakes could expose xAI to additional liability under this law.

Finally, xAI's response to the crisis has been widely criticized. Rather than implementing safeguards, the company monetized the deepfake feature by limiting it to paying subscribers. When Bloomberg Law requested comment on the lawsuit, xAI's auto-reply stated "Legacy Media Lies."

What Is the Anticipated Grok Deepfake Settlement Amount?

No settlement amount has been proposed or discussed at this stage. However, the potential financial exposure for xAI is substantial.

Under California's AB 621, the state's new deepfake pornography law that took effect January 1, 2026, xAI faces statutory damages of up to $250,000 per malicious violation. With millions of images in circulation, the theoretical liability is astronomical.

If the DEFIANCE Act becomes law, individual victims could claim $150,000 per violation, or $250,000 if the deepfake was connected to stalking, harassment, or sexual assault.

The plaintiff is seeking compensatory, presumed, statutory, and punitive damages, as well as declaratory and injunctive relief. Given xAI's estimated valuation and the scale of harm alleged, a settlement in the hundreds of millions of dollars would not be unprecedented for a case of this magnitude and public visibility.

For comparison, major tech privacy class action settlements in recent years have ranged from hundreds of millions to billions of dollars. The final amount will depend on how many class members are identified, the strength of liability claims, and xAI's willingness to negotiate.

How Much Will Each Grok Deepfake Claimant Be Paid?

Individual payouts cannot be estimated at this time because no settlement has been reached, and the final class size is unknown.

In a typical class action settlement, the total amount is divided among all eligible claimants who submit valid claims. Payouts vary widely depending on the total settlement fund, the number of claimants, the severity of individual harm, and whether the settlement structure includes tiered payments.

Given that this case involves deeply personal harm — non-consensual sexual imagery — the settlement may include higher individual payments than typical consumer privacy cases. Victims who can document specific damages, such as emotional distress, lost wages, therapy costs, or reputational harm, may receive larger awards.

Named plaintiffs and those who experienced particularly severe harm often receive enhanced payments. The lead plaintiff in this case missed work and experienced severe emotional distress after discovering the deepfake image of herself.

What Should Grok Deepfake Victims Do Right Now?

If you believe you were depicted in a non-consensual sexualized image generated by Grok, legal experts recommend the following steps:

Document everything now. Take screenshots with timestamps, save URLs, and preserve any communication with X or xAI regarding takedown requests. This evidence will be critical whether you join the class action, file an individual claim, or report under the Take It Down Act once enforcement begins.

After May 19, 2026, the Take It Down Act will require platforms to remove non-consensual intimate images within 48 hours of a valid request. Victims will need to submit an electronic signature, a good-faith statement, and sufficient information for the platform to locate the content.

Multiple law firms are currently accepting case evaluations from Grok deepfake victims. Contacting an attorney does not commit you to any action but can help you understand your legal options.

The DEFIANCE Act, if passed by the House and signed into law, would give victims a 10-year statute of limitations to file civil suits from the date they discover the violation or turn 18, whichever comes later.

Related Legal Actions Against xAI

This class action is not the only legal challenge xAI faces over Grok's deepfake capabilities.

Ashley St. Clair, the mother of one of Elon Musk's sons, filed her own lawsuit against xAI on January 15, 2026, alleging that X users generated explicit deepfake images of her using Grok — including an image depicting her 14-year-old self in a bikini and her adult self wearing a swastika-covered bikini. She claims X initially refused to remove the images, stating they did not violate the platform's policies, and later retaliated by demonetizing her account. xAI counter-sued in Texas, claiming breach of contract and seeking $75,000 in damages.

The California Attorney General's cease-and-desist order, the EU's formal investigation under the Digital Services Act, France's criminal investigation, and Ireland's DPC probe all represent separate legal and regulatory tracks that could result in additional fines and enforcement actions.

Frequently Asked Questions


What is the Grok AI deepfake class action lawsuit?

A class action (Jane Doe v. xAI Corp., Case No. 5:26-cv-00772) was filed January 23, 2026 in the Northern District of California. The lawsuit alleges xAI's chatbot Grok generated and publicly disseminated millions of non-consensual sexualized deepfake images of women and children on X, and that xAI failed to implement safeguards while monetizing the feature through paid subscriptions.

Who qualifies for the Grok deepfake class action?

All U.S. individuals depicted in sexualized or revealing deepfakes created and disseminated by Grok without their consent. This includes people whose photos were used to generate explicit images, people whose children were depicted, and anyone who reported content to X but saw delayed action or refusal to remove it.

How many deepfake images did Grok generate?

Over 3 million sexualized images in just 11 days (December 29, 2025 – January 8, 2026), including approximately 23,000 depicting minors. Over 4.4 million total images were posted to X in the broader December 2025 to January 2026 period, with up to 41% containing sexual imagery of women.

Is there a claim form available?

No. The case was just filed and no settlement has been reached. Claim forms become available only after a settlement is approved by the court, which could take years. Victims can contact Berger Montague P.C. or other investigating law firms for case evaluations.

How much could the settlement be worth?

No amount has been proposed. Under California's AB 621, xAI faces up to $250,000 per malicious violation. The DEFIANCE Act (passed by the Senate) would allow $150,000 to $250,000 per victim. With millions of images involved, legal experts believe a settlement in the hundreds of millions is plausible.

What should victims do right now?

Document everything: take screenshots with timestamps, save URLs, and preserve all communications with X or xAI about takedown requests. Contact an investigating law firm for a free case evaluation. After May 19, 2026, the Take It Down Act will require platforms to remove flagged non-consensual intimate images within 48 hours.

What is the DEFIANCE Act?

The DEFIANCE Act was unanimously passed by the U.S. Senate on January 13, 2026. If passed by the House, it would create a federal civil cause of action allowing deepfake victims to sue for $150,000 to $250,000 per violation with a 10-year statute of limitations from the date they discover the violation or turn 18.

What is the Take It Down Act?

Signed into law May 19, 2025, the Take It Down Act requires platforms to implement notice-and-removal processes for non-consensual intimate images by May 19, 2026, with FTC enforcement. Platforms must remove content within 48 hours of a valid request.

Are there other lawsuits against xAI over Grok deepfakes?

Yes. Ashley St. Clair filed a separate lawsuit on January 15, 2026. xAI counter-sued in Texas seeking $75,000. Additionally, regulatory investigations are underway in 12+ countries including the EU, UK, France, Ireland, India, Japan, and Australia. California's attorney general issued a cease-and-desist order.

Sources

• U.S. District Court, Northern District of California — Case No. 5:26-cv-00772
• Center for Countering Digital Hate (CCDH) — Grok deepfake analysis
• U.S. Senate — DEFIANCE Act (passed January 13, 2026)
• Take It Down Act (signed May 19, 2025)
• California AB 621 (effective January 1, 2026)

Case Details

Case: Jane Doe v. xAI Corp., et al., Case No. 5:26-cv-00772
Court: U.S. District Court, Northern District of California
Date Filed: January 23, 2026
Plaintiff's Counsel: Sophia M. Rios, Berger Montague P.C.
Defendants: xAI Corp. and xAI LLC
Status: Active — no settlement or claim form available

Class Action Information

This page is for informational purposes. OpenClassActions.com is not a law firm and is not a claims administrator. For legal advice speak with an attorney licensed in your state.

Grok AI Deepfake Lawsuit Summary

Status: Active — No Settlement or Claim Form
Case: Jane Doe v. xAI Corp., No. 5:26-cv-00772 (N.D. Cal.)
Date Filed: January 23, 2026
Court: U.S. District Court, Northern District of California
Defendants: xAI Corp. and xAI LLC
Plaintiff's Counsel: Berger Montague P.C. (Sophia M. Rios)
Allegations: Grok generated millions of non-consensual sexualized deepfake images of women and children
Scale: 3M+ sexualized images in 11 days; 23,000 depicting minors; 6,700 per hour at peak
Who Qualifies: U.S. individuals depicted in non-consensual sexualized Grok deepfakes
Causes of Action: Product liability, negligence, public nuisance, defamation, IIED, appropriation of likeness, CA Right of Publicity, intrusion, CA UCL
Related Laws: DEFIANCE Act (Senate passed 1/13/26), Take It Down Act (signed 5/19/25), California AB 621 (eff. 1/1/26)
Regulatory Actions: 35 state AGs, CA cease-and-desist order, multi-country investigations
Certification Expected: Late 2026 or 2027