Pictured here is the author on a pier in Dorset this past summer.
Notably, two of these images were generated using Grok, the artificial intelligence tool developed by Elon Musk's company xAI and made available for public use.
The AI’s output is strikingly convincing. While the author has never worn the pictured yellow ski suit or red and blue jacket—the central photograph is the original—the generated images raise concerns about proving authenticity in the face of such realistic forgeries.
Grok has come under scrutiny for its capacity to generate unauthorized and sexually explicit images of women.
Reports indicate that, in response to user prompts, the tool produced images of individuals in bikinis and, in some cases, more explicit depictions, with the results shared publicly on the social media platform X.
Furthermore, evidence suggests the AI has also been used to generate sexualized images of children.
In response to widespread condemnation, Ofcom, the UK’s online regulator, has announced an urgent investigation into whether Grok has violated British online safety laws.
The government has urged Ofcom to expedite its inquiry.
However, to maintain credibility and avoid accusations of stifling free speech—a common criticism leveled against the Online Safety Act—Ofcom must conduct a thorough and impartial investigation.
Elon Musk’s recent silence on the matter suggests an acknowledgement of the gravity of the situation.
Yet, he has also accused the British government of seeking “any excuse” for censorship.
Critics argue this defense is inadequate in this instance.
“AI undressing people in photos isn’t free speech – it’s abuse,” argues Ed Newton-Rex, a prominent campaigner.
“When every photo a woman posts of themselves on X immediately attracts public replies in which they’ve been stripped down to a bikini, something has gone very, very wrong.”
Given the complexities, Ofcom’s investigation is likely to be protracted, requiring extensive deliberation and potentially testing the patience of both policymakers and the public.
This investigation represents a pivotal moment not only for Britain’s Online Safety Act but also for the regulator itself.
Failure is not an option.
Ofcom has faced prior criticism for lacking enforcement power. The Online Safety Act, years in development, only fully took effect last year.
To date, six fines have been issued, with the largest being £1m, and only one has been paid.
Furthermore, the Online Safety Act does not explicitly address AI products.
While sharing intimate, non-consensual images, including deepfakes, is currently illegal, requesting an AI tool to generate such images is not.
This is set to change. The government will enact legislation this week to criminalize the creation of these images.
The UK also plans to amend existing legislation currently under consideration in Parliament, making it illegal for companies to provide the tools used to create them.
These provisions sit outside the Online Safety Act; they form part of the Data (Use and Access) Act.
Despite numerous government announcements, enforcement has been delayed until now.
Today’s announcement reflects a government determined to counter criticism that regulation is too slow, showcasing its ability to act decisively when necessary.
The implications extend beyond Grok.
The new law could pose challenges for owners of other AI tools capable of generating similar images.
Enforcement remains a key question. Grok’s actions came to light because its output was publicly shared on X.
If a tool is used privately, with users circumventing safeguards and sharing content only with consenting individuals, how will violations be detected?
If X is found to have violated the law, Ofcom could impose a fine of up to 10% of its worldwide revenue or £18m, whichever is greater.
Ofcom could even seek to block Grok or X in the UK, a move that could have significant political repercussions.
At the AI Action Summit in Paris last year, US Vice-President JD Vance cautioned foreign governments against regulating American tech companies.
His audience, including numerous world leaders, remained silent.
Tech firms maintain substantial influence within the White House and have invested heavily in AI infrastructure in the UK.
Can the country afford to alienate them?
