Wed. Jan 7th, 2026
Ofcom Queries X Over Reports of Grok AI Generating Sexualized Child Images

Ofcom has initiated “urgent contact” with xAI, Elon Musk’s AI company, following reports that its Grok AI tool is allegedly capable of generating “sexualized images of children” and digitally undressing women.

A spokesperson for the regulatory body stated that it is also investigating concerns surrounding Grok’s purported production of “undressed images” of individuals.

The BBC has reviewed multiple instances on the X platform where users prompted the chatbot to alter existing images of women, depicting them in bikinis without their consent or placing them in sexually suggestive scenarios.

X has not yet responded to requests for comment. On Sunday, the platform issued a warning to its user base, cautioning against utilizing Grok to generate illicit content, including child sexual abuse material.

Elon Musk posted a statement indicating that individuals who instruct the AI to generate illegal content will “suffer the same consequences” as if they had directly uploaded the material themselves.

xAI’s official acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner.” However, reports indicate that users are exploiting Grok to digitally undress individuals without their consent.

Images of Catherine, Princess of Wales, were among those reportedly altered by Grok users on X, depicting her without clothing.

The BBC has reached out to Kensington Palace for comment.

The European Commission has stated it is “seriously looking into this matter,” and authorities in France, Malaysia and India are reportedly assessing the situation.

The UK’s Internet Watch Foundation confirmed to the BBC that it has received reports from the public regarding images generated by Grok on X.

However, the organization stated that it has not yet identified images that surpass the UK’s legal threshold to be classified as child sexual abuse imagery.

Grok is a free virtual assistant, with optional premium features, that responds to prompts from X users who tag it in their posts.

Samantha Smith, a journalist who discovered that users had employed the AI to create images of her in a bikini, shared with the BBC’s PM program on Friday that she felt “dehumanized and reduced into a sexual stereotype.”

“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me,” she said.

Under the Online Safety Act (OSA), which Ofcom enforces, it is illegal to create or share intimate or sexually explicit images of an individual without their consent, including AI-generated “deepfakes.”

Technology companies are also obligated to take “appropriate steps” to mitigate the risk of UK users encountering such content and to remove it “quickly” upon notification.

Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, described the reports as “deeply disturbing.”

She stated that the Committee found the OSA to be “woefully inadequate” and characterized it as “a shocking example of how UK citizens are left unprotected whilst social media companies act with impunity.”

She urged the government to adopt the Committee’s recommendations, compelling social media platforms “to take greater responsibility for their content.”

European Commission spokesperson Thomas Regnier said on Monday it was aware of posts made by Grok “showing explicit sexual content,” as well as “some output generated with childlike images”.

“This is illegal,” he said, also calling it “appalling” and “disgusting”.

“This is how we see it, and this has no place in Europe,” he said.

Regnier said X was “well aware” the EU was “very serious” about enforcing its rules for digital platforms – having handed X a €120m (£104m) fine in December for breaching its Digital Services Act.

A Home Office spokesperson said it was legislating to ban nudification tools, and under a new criminal offence, anyone who supplied such tech would “face a prison sentence and substantial fines”.
