Generative Artificial Intelligence Makes It Easier for Potential Child Abuse Material to Circulate

An AI (Artificial Intelligence) logo blended with four fake Twitter accounts bearing profile pictures apparently generated by artificial intelligence software, pictured in Helsinki, Finland, on June 12, 2023. (Olivier Morin/AFP via Getty Images)
Improving technology is making it easier for predators to manipulate material and spread it online, Australia’s eSafety Commissioner has told a parliamentary committee investigating the dangers associated with generative artificial intelligence (AI) technology.

This follows the commissioner’s release of a report, ‘Trends Position Statement Generative AI’ (pdf), highlighting how AI-generated content can potentially influence public perceptions and values, including by promoting extremist ideologies.

“We’re looking at the risk of generative AI acts that could potentially lead to class one content being created, such as child sexual abuse material or pro-terror material,” Executive Morag Bond said at a hearing on Aug. 23.

“It’s very much an issue for us.”

Generative AI Has Captured the Imagination

Generative AI can produce content by drawing on enormous datasets, using models inspired by the structure of the human brain. It can mimic human expression, generate text, compose music and write stories, but that is just the tip of the iceberg.

The report highlights how chatbots and multimodal models have the potential to generate highly personalised, emotive, and invasive content that may appear authoritative and, whether intentionally or by accident, can be harmful.

The report gave the example of Snapchat’s AI chatbot advising a 13-year-old girl on how she could lie to her parents to meet a 31-year-old man.

The technology can also be misused in other ways, such as generating photos and videos that are then used to blackmail or threaten individuals. The report links to a public announcement by the Federal Bureau of Investigation, which noted that as of April it had observed an uptick in ‘sextortion’ victims reporting the use of fake images or videos created from content posted on their social media sites, web postings, or captured during video chats.

The Sophistication of the Technology Creates Endless Possibilities

AI can also imitate conversations, making it ripe for fraud and other cybercrime to flourish. Offenders can produce realistic AI-generated graphic images that are nearly impossible to authenticate, which could affect how crimes are investigated and solved.

In terms of mitigating harms and risks, determining who is responsible becomes an important consideration, the report stated.

It calls for the online industry to take a lead role by adopting a safety-by-design approach built on three principles: service provider responsibility; user empowerment and autonomy; and transparency and accountability.

It is not all doom and gloom: generative AI can be cost-effective and is used to automate repetitive and tedious tasks. It can help scale up online support services by directing customer service matters to the right person, and it is also a useful tool for content tracking and moderation, which is helpful for some companies.
