California’s Attorney General has initiated an investigation into the proliferation of sexually explicit AI deepfakes generated by Grok, the AI model developed by Elon Musk’s xAI.
Attorney General Rob Bonta, a Democrat, announced the probe in a statement, noting: “The deluge of reports detailing the non-consensual, sexually explicit material that xAI has produced and disseminated online in recent weeks is deeply concerning.”
xAI, the company behind Grok, has previously stated that “anyone using or prompting Grok to create illegal content will face the same consequences as if they uploaded illegal content themselves.”
The California inquiry coincides with warnings from British Prime Minister Sir Keir Starmer regarding potential action against X.
In Wednesday’s statement, Bonta asserted: “This material, depicting women and children in nude and sexually explicit scenarios, has been leveraged to harass individuals across the internet.”
The Attorney General urged xAI to take immediate corrective measures.
California Governor Gavin Newsom, also a Democrat, posted on X Wednesday that xAI’s decision to “create and host a breeding ground for predators… is reprehensible.”
The BBC has reached out to xAI for comment.
On Wednesday, Musk posted on X stating he is “not aware of any naked underage images generated by Grok. Literally zero.”
“Obviously, Grok does not spontaneously generate images,” Musk wrote. “It does so only according to user requests.”
The tech entrepreneur, a Republican donor, has also suggested that criticism of X is politically motivated and that the Grok controversy is being used as a pretext for censorship.
In November, Wired magazine reported that tools from other AI companies, including OpenAI and Google, have also been utilized to digitally undress individuals.
Last week, three US Democratic senators requested that Apple and Google remove X and Grok from their respective app stores.
Within hours of the request, X limited its image generation tool, making it exclusive to paying subscribers.
X and Grok remain available on the Apple App Store and Google Play.
The development comes amid ongoing debate over whether the legal protections afforded to US tech companies for content posted by users extend to material generated by AI platforms.
Section 230 of the Communications Decency Act of 1996 grants online platforms legal immunity from liability for user-generated content.
However, Prof. James Grimmelmann of Cornell University contends that this law “only protects sites from liability for third-party content from users, not content the sites themselves produce.”
Grimmelmann stated that xAI is attempting to deflect responsibility for the imagery onto users, but expressed skepticism regarding the viability of this argument in court.
“This isn’t a case where users are making the images themselves and then sharing them on X,” he said.
In this case, “xAI itself is making the images. That’s outside of what Section 230 applies to,” he added.
Senator Ron Wyden of Oregon has argued that Section 230, which he co-authored, does not extend to AI-generated images, asserting that companies should be fully accountable for such content.
“I’m glad to see states like California step up to investigate Elon Musk’s horrific child sexual abuse material generator,” Wyden told the BBC on Wednesday.
Wyden is one of the three Democratic senators who asked Apple and Google to remove X and Grok from their app stores.
The announcement of the investigation in California comes as the UK prepares legislation that would criminalize the creation of non-consensual intimate images.
The UK’s regulatory body, Ofcom, has also initiated an investigation into Grok.
If Ofcom determines that the platform has violated the law, it can impose fines of up to 10% of its worldwide revenue or £18 million, whichever is greater.
On Monday, Sir Keir Starmer informed Labour MPs that Musk’s social media platform X could forfeit the “right to self-regulate,” adding that “if X cannot control Grok, we will.”
