A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk's xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI technology.
"Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused," the filing says. "xAI's financial gain through the increased use of its image- and video-making product came at their expense and well-being."
From December to early January, Grok allowed Grok AI and X social media users to create AI-generated nonconsensual intimate images, commonly known as deepfake porn. Reports estimate that Grok users made 4.4 million "undressed" or "nudified" images, 41% of the total number of images created, over a period of nine days.
X, xAI and its safety and child safety divisions did not immediately respond to a request for comment.
The wave of "undressed" images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.
The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that looks realistic. The new complaint compares Grok's self-proclaimed "spicy AI" generation to the "dark arts" for the ease with which it subjects children to "any pose, however sick, however fetishized, however unlawful."
"To the viewer, the resulting video appears completely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse," the complaint reads.
The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed its technology to third-party companies overseas, which sold subscriptions that allowed abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI's servers, which makes the company liable, the complaint argues.
The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted that abusive, AI-generated sexual material of her was circulating online by an anonymous Instagram message in early December. The filing says the anonymous Instagram user told her about a Discord server where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to track down and arrest one perpetrator.
Ongoing investigations led the families of Jane Does 2 and 3 to learn that their children's images had been transformed into abusive material with xAI technology.