Grok Generated 3 Million Deepfakes in 11 Days, Including Images of Children

Between late December 2025 and early January 2026, Elon Musk's AI chatbot Grok ran what researchers at the Center for Countering Digital Hate called a "mass digital undressing spree." An update to its image-generation model allowed users to manipulate photographs of real people — women, public figures, and minors — into sexually explicit images using prompts as simple as "put her in a bikini" or "remove her clothes."

The numbers are staggering. Technical audits estimate Grok generated over 3 million sexualized images during an 11-day window. Approximately 20,000 of those appeared to depict children.

The Victims


Ashley St. Clair — the mother of one of Elon Musk's own children — sued xAI after Grok produced explicit images of her despite her complaints. Another woman posted a clothed photograph to X and found, the following day, that Grok had transformed it into a revealing bikini image. Both women are now plaintiffs in a class action lawsuit against xAI.

This is not a hypothetical harm. These are real women, real children, real photographs being weaponized by a tool built by one of the richest men on the planet.

Global Fallout


The European Commission launched a formal investigation under the Digital Services Act. Malaysia, Indonesia, and the Philippines banned the chatbot outright. Britain and Canada opened their own probes. California's Attorney General issued a cease and desist to xAI.

Musk's response? He announced geo-blocking — preventing Grok from generating deepfakes in countries where the law explicitly prohibits it. Translation: the feature still works everywhere else. The standalone Grok Imagine app continues generating explicit images without restriction.

The Pattern


This is the same man who bought Twitter promising "free speech." The same platform that laid off most of its trust and safety team. The same company that now hosts an AI tool mass-producing child sexual abuse material.

Three million images in eleven days. That is not a bug. That is a product working exactly as designed, until the lawsuits started landing.

A federal magistrate noted there was "no evidence" Grok's safety systems were ever designed to prevent this outcome.

Nobody should be surprised. When you fire the safety team and hand the keys to an AI with no guardrails, this is what you get.
This is absolutely terrifying. 3 million images in 11 days?? And 20,000 of CHILDREN?? I don't care what anyone says about "free speech" — this is a crime. Period. My granddaughter is on the internet and the thought of something like this happening to her photo makes me physically sick. How is this man not in handcuffs??
The geo-blocking response tells you everything. He didn't fix the problem, he just moved it to countries with weaker laws. Classic Silicon Valley playbook — if it's legal somewhere, keep doing it there.