Consumer advocates demand probe into xAI's Grok Imagine platform

Image (c) ConsumerAffairs. Consumer groups urge an investigation into xAI's Grok Imagine for enabling non-consensual intimate imagery.

'Spicy' feature accused of enabling fake nude images and videos of both celebrities and private citizens

  • Consumer and privacy groups file formal complaints urging regulators to investigate xAI’s Grok Imagine platform.

  • “Spicy” feature accused of enabling non-consensual intimate imagery (NCII), including fake nude images and videos of both celebrities and private citizens.

  • Coalition of 16 advocacy organizations warns the tool poses urgent risks to survivors, children, and vulnerable communities.


A coalition of consumer protection, privacy, and digital rights advocates has filed a sweeping request for investigation into xAI, accusing the company of promoting and enabling non-consensual intimate imagery (NCII) through its Grok Imagine platform.

The filing, led by the Consumer Federation of America (CFA), was submitted to attorneys general in all 50 states, the District of Columbia, 93 U.S. attorneys’ offices, and the Federal Trade Commission. It asks regulators to crack down on what the groups call the “promotion, creation, and facilitation” of illegal sexual exploitation via Grok Imagine’s “spicy” feature, which allows users to generate nude videos from AI-produced images.

“Exploitative, unfair, and lazy”

“This feature is exploitative, unfair, and lazy,” said Ben Winters, CFA’s director of AI and privacy, in a news release. “It’s a crystal-clear representation of why AI built off of people’s data without knowledge or consent in the hands of an unaccountable billionaire is a legal and ethical nightmare. This feature endangers everyone, with an acute and urgent risk for domestic violence survivors, kids, and more.”

Advocates argue the creation of NCII—whether involving public figures or private citizens—poses devastating risks, from extortion and blackmail to long-lasting personal and professional harm.

Broad coalition of support

The complaint was joined by 15 organizations, including the Center for Economic Justice, Common Sense Media, the Electronic Privacy Information Center, Fairplay, the National Consumers League, the National Center on Sexual Exploitation, and the Tech Oversight Project.

The groups said swift enforcement is needed not only to protect individuals but to draw clear boundaries around what constitutes acceptable deployment of artificial intelligence.

“The creation of NCII is unacceptable, illegal, and damaging enough, but embedding it into a consumer-facing AI platform risks normalizing abuse at scale,” the coalition wrote.

