Regulation
UK Criminalizes AI Nudes After Grok Scandal Forces Emergency Action
The government fast-tracks deepfake laws as Ofcom investigates X for hosting AI-generated sexual images, including those of minors.

Last week, Technology Secretary Liz Kendall stood before Parliament to announce what had been planned as routine enforcement of existing legislation. Instead, she was responding to an emergency: reports that Elon Musk's Grok chatbot on X had been generating explicit deepfakes of real people, including children. Criminal penalties for creating non-consensual intimate AI images will take effect in February, months ahead of schedule.
The acceleration marks a rare instance of AI regulation catching up to deployment speed. The Data (Use and Access) Act passed last year already contained provisions against deepfake creation, but the Grok incident transformed implementation from bureaucratic process to political priority. Ofcom has launched a formal investigation into X to determine whether the platform violated the Online Safety Act by allowing the proliferation of AI-generated pornography.
The new enforcement regime targets two distinct behaviors. Creating non-consensual intimate images becomes a criminal offense under the Data Act, while the upcoming Crime and Policing Bill will ban "nudification" tools, applications designed to undress people in existing photos. Professor Clare McGlynn, a legal expert who helped draft the legislation, emphasized that the law also prohibits requesting others to generate such images, addressing gaps in previous "revenge porn" laws.
Prime Minister Keir Starmer framed the crackdown in terms of consent rather than content moderation. "Free speech does not justify violating consent," he stated. The distinction matters legally—the UK isn't banning AI image generation broadly, but criminalizing its use to create sexual content of real people without permission.
X faces potential fines of up to £18 million or 10% of its global revenue, whichever is greater, if Ofcom's investigation finds violations. X says it is refining safety filters to block such content, though the company declined to provide specifics about its timeline or methods.
The enforcement push reveals a pattern emerging across jurisdictions: governments moving from abstract AI safety discussions to concrete criminal law when specific harms become visible. The Grok incident provided what policymakers often lack: a clear villain, identifiable victims, and public outrage sufficient to overcome legislative inertia.
Enforcement remains the open question. The technology to generate deepfakes exists across dozens of platforms and open-source models. Targeting major platforms like X may reduce casual creation, but the underlying capability has already dispersed. As one researcher noted, "The technical barrier to creating these images has collapsed."
The legislation also introduces a novel legal concept: criminalizing the request for deepfake creation, not just the act itself. The provision reflects the ecosystem nature of the problem, in which marketplaces, commissioners, creators, and distributors all play distinct roles.
UK enforcement begins February 2026, making deepfake nude creation a criminal offense. The Ofcom investigation could result in fines up to £18 million or 10% of X's global revenue. New laws criminalize both creating and requesting non-consensual intimate AI images, and "nudification" tools will be banned under the Crime and Policing Bill. Platform liability extends to hosting such content, not just enabling creation.
The February deadline creates a test case for rapid AI regulation. If UK authorities successfully prosecute early cases, other jurisdictions may adopt similar frameworks. If enforcement proves ineffective against distributed tools and anonymous creators, the limitations of national law against borderless technology become stark.
The deeper question remains whether criminalizing outputs can meaningfully constrain a technology whose core capability, image manipulation, has legitimate uses. We could always edit photos. Now we can edit them convincingly, at scale, with three words of instruction. The UK is betting that criminal law can draw that line. February will show whether the bet pays off.


