The emergence of child abuse images created by artificial intelligence (AI) means parents and carers need to have “open and non-judgemental conversations” with their children about the dangers and harms this material causes, the Australian Federal Police (AFP) advise.
They say the Australian Centre to Counter Child Exploitation (ACCCE), which is led by the AFP, has witnessed an increase in the use of AI-generated child abuse material in the past year, including students creating “deepfakes” to harass or embarrass their classmates.
Earlier this month, a teenage boy at a south-western Sydney high school was reported to police and the eSafety Commissioner after he allegedly used AI to create explicit images of female students and then circulated them using fake social media accounts.
And in June last year, about 50 students at Bacchus Marsh Grammar School in Victoria had images taken from their social media accounts and manipulated into deepfake nudes using AI.
A boy was also expelled from Salesian College, a Catholic school in Melbourne’s south-east, after he created fake sexual images of a female teacher which were circulated around the school.
In 2023, a study by the Internet Watch Foundation (IWF) found that in a single month, 20,254 AI-generated images were posted to one dark web forum dedicated to child sexual abuse material (CSAM).
A 2024 update found that 3,512 new AI-generated images had been shared on the same forum, and the first deepfake videos had begun to emerge.
IWF analysts classified 90 percent of the images as “realistic enough to be assessed under the same law as real CSAM.”
While the overall number of AI-generated images is comparatively low (an estimated 0.16 percent of the total CSAM currently in circulation), the IWF warns that the problem will likely get worse as the technology becomes easier to master.
“Evidence of (presumably, low-tech) perpetrators trying and failing to generate AI CSAM on [publicly available AI] platforms has been found shared on dark web forums,” the 2024 report notes.

The IWF also found that images generated by artificial intelligence, including CSAM, were increasingly appearing on the clear web rather than the dark web.
“In this context, and in the context of the better, faster, and more accessible tools to generate images and videos, the future continues to hang in the balance.”
Unlike the UK-based IWF, the ACCCE does not differentiate between AI-generated child abuse images and photographs of real children, an AFP spokesperson told The Epoch Times, so it has no measure of the extent of the problem in Australia.
But the existence of such images in this country was highlighted by the imprisonment of two Australian men last year, one for possession of AI-generated child abuse material, the other for producing it.
The possession offence resulted in a two-year jail term with a non-parole period of 10 months, while producing 739 images led to a sentence of 13 months’ imprisonment.
AFP Commander Helen Schneider said young people might not be aware that using AI to create material featuring their classmates could constitute a criminal offence.
Only Half of All Parents Discuss Online Safety
“The AFP encourages all parents and guardians to have open and honest conversations with their child on this topic, particularly as AI technology continues to become increasingly accessible and integrated into platforms and products,” Schneider said.

“AFP-led education program ThinkUKnow has free resources available to assist parents and carers navigate these conversations, and information on where to get help if their child is a victim.
“These conversations can include how they interact with technology, what to do if they are exposed to child abuse material, bolstering privacy settings on online accounts, and declining unknown friend or follower requests.”
Research by the ACCCE in 2020 revealed that only about half of parents talked to their children about online safety.
The ACCCE brings together specialist expertise and skills in a central hub, supporting investigations into online child sexual exploitation and developing prevention strategies focused on creating a safer online environment.