The rapid evolution of AI image and video generation took a controversial turn in early 2026 when xAI’s Grok chatbot, particularly its “Spicy Mode” feature in the Grok Imagine tool, sparked widespread backlash over the creation and sharing of nonconsensual AI deepfakes and sexually explicit content.
Introduced in mid-2025 as part of Grok Imagine (available primarily on mobile apps for premium subscribers such as SuperGrok or Premium+ users), Spicy Mode allowed more "bold" or NSFW-adjacent outputs than heavily restricted competitors like those from OpenAI or Google. Marketed as enabling "unrestricted creativity" with safeguards, it permitted suggestive images, short videos (6–15 seconds), and edits involving revealing attire, nudity, or provocative scenarios—often involving fictional characters, but increasingly applied to photos of real people.
By late 2025 and into January 2026, users on X (formerly Twitter) began exploiting the tool en masse. Simple prompts like “remove her clothes,” “put her in a micro bikini,” or “edit to underwear” transformed ordinary images—selfies, stock photos, celebrity shots, or even casual social media posts—into sexualized deepfakes. Reports documented hundreds to thousands of such posts daily, with estimates from researchers suggesting millions of sexualized images generated in short periods, including some appearing to depict minors or nonconsensual edits of real women and children.
The scale was staggering. Analyses from groups like the Center for Countering Digital Hate and others estimated Grok produced millions of sexualized outputs in bursts, flooding X with nonconsensual material used for harassment. Forums and communities shared workarounds, prompts, and compilations, while some users celebrated the “uncensored” edge.
This ignited global outrage. Governments and regulators responded swiftly:
- Countries like Indonesia, Malaysia, and others temporarily blocked or restricted Grok access over concerns about explicit deepfakes.
- Investigations launched in places like California (Attorney General probe into nonconsensual intimate images), the EU (European Commission scrutiny over “appalling” child-like content), the UK (Ofcom formal review), and bipartisan U.S. state actions demanding stronger controls.
- Class-action lawsuits alleged xAI facilitated exploitation, humiliating women and girls through public deepfakes on X.
xAI and Elon Musk pushed back, emphasizing user responsibility and legal compliance. Musk argued the abuses stemmed from users deliberately circumventing safeguards rather than from the tool's intended behavior, and promised fixes. By mid-January 2026, xAI implemented curbs: geoblocking explicit edits in restricted jurisdictions, preventing "undressing" of real people, limiting image editing and generation to paid subscribers in some cases, and strengthening safeguards against content depicting minors or nonconsensual scenarios. Grok itself began refusing many explicit requests, citing updated policies prioritizing "legal compliance and safety."
On X, reactions split sharply. Some users mourned the restrictions as over-correction stifling creativity (e.g., complaints about moderated “tame” fantasy scenes), while others shared “spicy” compilations in niche accounts. Defenders framed it as free speech and anti-censorship, accusing critics of hysteria to regulate AI. Critics highlighted harms like harassment and the ethical risks of easy deepfake tools.
As of February 2026, the feature persists in limited form for fictional, consent-framed generation, but under tighter moderation—especially for videos and edits of real people. Some regions restored access after xAI pledged compliance.
The episode underscores ongoing tensions in AI: balancing “maximally truth-seeking” and uncensored design against misuse risks, nonconsensual harm, and regulatory pressure. As Grok evolves, the “Spicy Mode” saga serves as a cautionary flashpoint in the push-pull between innovation, free expression, and preventing digital exploitation.