The children’s commissioner has called on the government to ban tools using deepfake technology to create naked images of children.
Bespoke nudification apps enable users to generate sexually explicit images of real people, posing “alarming risks” to children’s safety online, Dame Rachel de Souza has warned.
The report highlights how girls and young women are disproportionately targeted by such technology.
While it is illegal to possess AI-generated sexual images of children, current laws do not prohibit the AI models that produce them.
De Souza has called on ministers to outlaw nudification apps entirely and to tighten regulations to hold technology providers accountable.
“The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society,” she said.
Existing Legislation
In early 2025, the government announced plans to extend protections against AI-generated sexual content involving adults through the Data (Use and Access) Bill, alongside measures in the Crime and Policing Bill. Both pieces of legislation remain under parliamentary review.
However, these laws focus on criminalising individuals who create or share such content and stop short of banning the AI technology itself.
Under the Online Safety Act 2023, online platforms must prevent UK users from accessing illegal material, including child sexual abuse content. The Act also requires tech companies to protect children from harmful material, such as pornography.
Growing Risks of Deepfake Technology
An estimated 99 percent of sexually explicit deepfakes circulating online feature women and girls. The AI tools behind them are often trained on vast datasets composed almost entirely of images of female bodies. While high-profile women such as Emma Watson, Scarlett Johansson, and Millie Bobby Brown are frequent targets, the report warned of risks to ordinary women and schoolchildren.
Children themselves are being exposed to this harmful material at alarming rates.
A separate study by Girlguiding found that more than a quarter of 13- to 18-year-olds had encountered sexually explicit deepfakes of celebrities, friends, teachers, or even themselves.
The report also found that children with vulnerabilities are disproportionately at risk, with 25 percent reporting exposure to sexually explicit deepfakes—more than twice the rate of their non-vulnerable peers.
The Commissioner’s Office found nudification apps openly advertised on social media platforms like Instagram, Telegram, and X (formerly Twitter), with some accounts holding X Premium status—traditionally associated with verified users.
Searches on major app stores also returned nudification tools, despite previous crackdowns, while Google and Bing searches for terms like “deep nude” or “undress app” frequently led to explicit content within a few clicks.
The rise of what Ofcom has termed the “deepfake economy”—a network of websites, app developers, and individual users profiting from non-consensual sexual imagery—has only worsened the threat. Open-source AI models, which can be freely adapted, are being exploited to create customised nudification services, with little oversight or accountability.
De Souza said children who took part in focus groups for the report had voiced fears about the misuse of nudification app technology.
“They fear that anyone—a stranger, a classmate, or even a friend—could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps,” she said.