Parents Should Talk to Children About AI Child Abuse Material, AFP Advises

An increase in ‘deepfake’ child abuse material—including some generated by students—means parents and caregivers need to broach the subject with children.
Facial recognition of a child. (watman/depositphotos)
The emergence of child abuse images created by artificial intelligence (AI) means parents and carers need to have “open and non-judgemental conversations” with their children about the dangers and harms it causes, the Australian Federal Police (AFP) advises.

The AFP says the Australian Centre to Counter Child Exploitation (ACCCE), which it leads, has witnessed an increase in the use of AI-generated child abuse material in the past year, including students creating “deepfakes” to harass or embarrass their classmates.

Earlier this month, a teenage boy at a south-western Sydney high school was reported to police and the eSafety Commissioner after he allegedly used AI to create explicit images of female students and then circulated them using fake social media accounts.

And in June last year, about 50 students at Bacchus Marsh Grammar School in Victoria had images taken from their social media accounts and manipulated into deepfake nudes using AI.

A boy was also expelled from Salesian College, a Catholic school in Melbourne’s south-east, after he created fake sexual images of a female teacher which were circulated around the school.

In 2023, a study by the Internet Watch Foundation (IWF) found that in one month, 20,254 AI-generated images were posted to one dark web CSAM forum.

A 2024 update found that 3,512 new AI-generated images had been shared on the same forum, and the first deepfake videos had begun to emerge.

IWF analysts classified 90 percent of the images as “realistic enough to be assessed under the same law as real CSAM.”

While the overall number of AI-generated images is comparatively low (an estimated 0.16 percent of the total CSAM currently in circulation), the IWF warns that the problem will likely get worse as the technology becomes easier to master.

“Evidence of (presumably, low-tech) perpetrators trying and failing to generate AI CSAM on [publicly available AI] platforms has been found shared on dark web forums,” the 2024 report notes.

A comparison of AI-generated output produced by Midjourney from the same prompt over time. (Internet Watch Foundation)

The IWF also found that AI-generated images, including CSAM, increasingly appeared on the clear web rather than the dark web.

“The pace of AI development has not been slowing, nor has the number of people using AI for criminal purposes decreased,” the report says.

“In this context, and in the context of the better, faster, and more accessible tools to generate images and videos, the future continues to hang in the balance.”

Unlike the UK-based IWF, the ACCCE does not differentiate between AI-generated child abuse images and photographs of real children, an AFP spokesperson told The Epoch Times, so it has no measure of the extent of the problem in Australia.

But the existence of such images in this country was highlighted by the imprisonment of two Australian men last year, one for possession of AI-generated child abuse material, the other for producing it.

The possession offence resulted in a two-year jail term with a non-parole period of 10 months, while the production of 739 images led to a sentence of 13 months’ imprisonment.

AFP Commander Helen Schneider said young people might not be aware that using AI to create material featuring their classmates could constitute a criminal offence.

“Children and young people are curious by nature,” she said. “However, anything that depicts the abuse of someone under the age of 18—whether that’s videos, images, drawings or stories—is child abuse material, irrespective of whether it is ‘real’ or not.”

Only Half of All Parents Discuss Online Safety

“The AFP encourages all parents and guardians to have open and honest conversations with their child on this topic, particularly as AI technology continues to become increasingly accessible and integrated into platforms and products,” Schneider said.

“The AFP-led education program ThinkUKnow has free resources available to help parents and carers navigate these conversations, and information on where to get help if their child is a victim.

“These conversations can include how they interact with technology, what to do if they are exposed to child abuse material, bolstering privacy settings on online accounts, and declining unknown friend or follower requests.”

Research by the ACCCE in 2020 revealed that only about half of parents talked to their children about online safety.

The ACCCE brings together specialist expertise and skills in a central hub, supporting investigations into online child sexual exploitation and developing prevention strategies focused on creating a safer online environment.

Members of the public who have information about people involved in child abuse are urged to contact the ACCCE.
Advice and support for parents and carers about how they can help protect children online can be found at the ThinkUKnow website, an AFP-led education program designed to prevent online child sexual exploitation.
There are a range of support services available for anyone impacted by child sexual abuse and online child sexual exploitation.
Rex Widerstrom
Author
Rex Widerstrom is a New Zealand-based reporter with over 40 years of experience in media, including radio and print. He is currently a presenter for Hutt Radio.