Children’s Commissioner Calls for Ban on AI ‘Nudification’ Apps

Possessing AI-generated child sexual images is illegal, but the technology that creates them currently remains lawful.
A teenager uses her mobile phone to access social media in a file photo. Spencer Platt/Getty Images
Evgenia Filimianova

The children’s commissioner has called on the government to ban tools using deepfake technology to create naked images of children.

Bespoke nudification apps enable users to generate sexually explicit images of real people, posing “alarming risks” to children’s safety online, Dame Rachel de Souza has warned.

In a report published on Monday, she revealed that AI tools capable of producing illegal images of children are widely accessible through search engines and app stores—and are now cheaper than ever to use.

The report highlights how girls and young women are disproportionately targeted by such technology.

One 16-year-old girl questioned why these apps are even allowed to exist, saying: “It’s so easily accessible that anyone could just make those photos, put them on the internet and ... create a big controversy just by clicking a few buttons on a mobile app.”

While it is illegal to possess AI-generated sexual images of children, current laws do not prohibit the AI models that produce them.

De Souza has called on ministers to outlaw nudification apps entirely and to tighten regulations to hold technology providers accountable.

“The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society.

“Tools using deepfake technology to create naked images of children should not be legal and I’m calling on the government to take decisive action to ban them, instead of allowing them to go unchecked with extreme real-world consequences,” she said.

Existing Legislation

In early 2025, the government announced plans to extend protections against AI-generated sexual content involving adults through the Data Use and Access Bill, alongside measures in the Crime and Policing Bill.

Both pieces of legislation remain under parliamentary review.

However, these laws focus on criminalising individuals who create or share such content and stop short of banning the AI technology itself.

Under the Online Safety Act 2023, online platforms must prevent UK users from accessing illegal material, including child sexual abuse content. The Act also requires tech companies to protect children from harmful material, such as pornography.

Further protections are due in July 2025 when Ofcom’s new Children’s Codes come into force, aimed at shielding children from adult services and harmful content. However, these measures will not make nudification services illegal.

The children’s commissioner’s report warned that without decisive legislative action, children will remain at risk from emerging AI threats.

Growing Risks of Deepfake Technology

An estimated 99 percent of sexually explicit deepfakes circulating online feature women and girls. The AI tools behind them are often trained on vast datasets made up solely of female bodies.

While high-profile women such as Emma Watson, Scarlett Johansson, and Millie Bobby Brown are frequent targets, the report warned of risks to ordinary women and schoolchildren.

Children themselves are being exposed to this harmful material at alarming rates.

A 2024 survey by Internet Matters revealed that 13 percent of UK teenagers had seen, received, or shared a deepfake nude.

Another study by Girlguiding found that more than a quarter of 13- to 18-year-olds had encountered sexually explicit deepfakes of celebrities, friends, teachers—or even themselves.

The report also found that children with vulnerabilities are disproportionately at risk, with 25 percent reporting exposure to sexually explicit deepfakes—more than twice the rate of their non-vulnerable peers.

The document cited cases where victims have experienced PTSD, suicidal thoughts, and in tragic instances, death—such as 14-year-old Mia Janin, who took her life in 2021 after being targeted by online abuse that included deepfake images.

The Commissioner’s Office found nudification apps openly advertised on social media platforms like Instagram, Telegram, and X (formerly Twitter), with some accounts holding X Premium status—traditionally associated with verified users.

Searches on major app stores also returned nudification tools, despite previous crackdowns, while Google and Bing searches for terms like “deep nude” or “undress app” frequently led to explicit content within a few clicks.

The rise of what Ofcom has termed the “deepfake economy”—a network of websites, app developers, and individual users profiting from non-consensual sexual imagery—has only worsened the threat. Open-source AI models, which can be freely adapted, are being exploited to create customised nudification services, with little oversight or accountability.

De Souza said children who took part in focus groups for the report had voiced fears about the misuse of nudification app technology.

“They fear that anyone—a stranger, a classmate, or even a friend—could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps,” she said.

Evgenia Filimianova
Author
Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.