AI Generated Child Abuse Images Increasingly Found Online

The ‘realistic’ imagery is illegal, but watchdog finds it is not confined to the dark web and can be stumbled on in AI forums by unsuspecting internet users.
The logo of the ChatGPT application on a laptop screen in Frankfurt am Main, western Germany, on Feb. 26, 2024. Kirill Kudryavtsev/AFP via Getty Images
Rachel Roberts

AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, an internet watchdog has warned, saying it is not just confined to the dark web.

The Internet Watch Foundation (IWF), a charity that finds and removes child sexual abuse content from the internet, is calling for updated laws to deal with the new phenomenon of so-called “deepfake” imagery being used in this way.

The latest analysis from the IWF revealed that in the past six months alone, it received more reports of AI-generated abuse content than in the preceding 12 months.

AI-generated images can be extremely realistic and hard to differentiate from imagery of real children, the organisation said. Such content is criminal under UK law.

‘Distressing’ to Encounter

The IWF found 99 percent of this content on publicly accessible areas of the internet, with the watchdog warning of the distressing nature of encountering such images.

The IWF’s data revealed that the majority of reports it received (78 percent) came from unsuspecting members of the public who had stumbled across the imagery on sites such as forums or AI galleries. The remainder were discovered by IWF analysts through proactive searching.

The organisation found that more than half of the AI-generated content it discovered in the last six months was hosted on servers in just two countries: Russia and the United States.

An internet content analyst who works at the IWF, identified only as Jeff, said, “I find it really chilling, as it feels like we are at a tipping point and the potential is there for organisations like ourselves and the police to be overwhelmed by hundreds and hundreds of new images, where we don’t always know if there is a real child that needs help.”

Derek Ray-Hill, interim chief executive of the IWF, said that people should not mistake the use of deepfake imagery in this way for a victimless crime.

“People can be under no illusion that AI-generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.

“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet,” he said in a statement.

He added that the protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry “above any thought of profit.”

Ray-Hill called for new legislation to bring the law “up to speed for the digital age, and see tangible measures being put in place that address potential risks.”

Many campaigners have called for stricter regulation around the training and development of AI models, to ensure they do not generate harmful or dangerous content, and for AI platforms to refuse to fulfil any requests or queries that could result in such material being created.

‘Real-Life Threat’

Assistant Chief Constable Becky Riggs, child protection and abuse investigation lead at the National Police Chiefs’ Council, warned paedophiles that they will not be able to evade the law by using AI-generated imagery.

She said in a statement: “The scale of online child sexual abuse and imagery is frightening, and we know that the increased use of artificial intelligence to generate abusive images poses a real-life threat to children.

“Policing continues to work proactively to pursue offenders, including through our specialist undercover units, who disrupt child abusers online every day, and this is no different for AI-generated imagery.”

Riggs called on Big Tech firms to take more responsibility for the creation and sharing of such imagery.

“While we will continue to relentlessly pursue these predators and safeguard victims, we must see action from tech companies to do more under the Online Safety Act to make their platforms safe places for children and young people.

“This includes and brings into sharp focus those companies responsible for the developing use of AI and the necessary safeguards required to prevent it being used at scale, as we are now seeing.”

She added that police will continue to work closely with the National Crime Agency, government, and industry to harness technology to fight the online abuse and exploitation of children.

Existing laws have previously been updated to ensure that both pseudo-photographs and non-photographic imagery depicting child sexual abuse are illegal in the UK.

The Protection of Children Act 1978 was amended by the Criminal Justice and Public Order Act 1994 to criminalise the taking, distribution, and possession of an “indecent photograph or pseudo-photograph of a child.”

The Coroners and Justice Act 2009 criminalises the possession of “a prohibited image of a child,” meaning non-photographic ones, such as cartoons, drawings, or animations.

Rachel Roberts
Rachel Roberts is a London-based journalist with a background in local then national news. She focuses on health and education stories and has a particular interest in vaccines and issues impacting children.