A week after the Australian government proposed new laws to target deepfake artificial intelligence (AI) images, police have arrested a teenage boy following a disturbing incident at a Melbourne private school.
Around 50 female students were targeted, with AI used to generate fake images and videos from photos of the girls’ faces.
In a statement, Victoria Police announced the arrest of the Bacchus Marsh Grammar student.
“Police have arrested a teenager in relation to explicit images being circulated online,” a police spokesperson told The Epoch Times in a statement. “He was released pending further enquiries. The investigation remains ongoing.”
Bacchus Marsh Grammar acting principal Kevin Richardson told The Epoch Times the school was taking the matter seriously and had been in contact with police.
“Bacchus Marsh Grammar has been made aware of the production and circulation of video content that includes images of students from the school community,” he said.
“The wellbeing of Bacchus Marsh Grammar students and their families is of paramount importance to the School and is being addressed. All students affected are being offered support from our wellbeing staff.”
Concerns Children Are Engaging in Sexual Harm
Sexual Assault Services Victoria chief executive Kathleen Maltzahn told AAP the circulation of the images showed there was a lack of information around the law. AI programs that allow the creation of such fake pornographic content are readily available to internet users.
“We’re seeing significant levels of children using sexual harm against others,” Ms. Maltzahn told AAP.
She said there was a need to get ahead of the deepfake issue by working with schools to prevent children from becoming involved in it.
Digitally created and altered sexually explicit material shared without consent is a damaging and deeply distressing form of abuse, according to a statement from the office of Attorney-General Mark Dreyfus on the introduction of the deepfake laws.
His office said such acts were overwhelmingly targeted towards women and girls, “perpetuating harmful gender stereotypes and contributing to gender-based violence.”
The Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, introduced last week, would impose criminal penalties on those producing and sharing damaging deepfake content.
Under the proposed laws, offenders could face six to seven years’ imprisonment depending on the level of offending.
The proposed new laws would build on other actions taken by the government. These include increased funding for the eSafety Commissioner, bringing forward the review of the Online Safety Act by a year, and addressing harmful practices such as doxxing.
Data Scientist Warns: Fake Images Causing Real Harm to Individuals
Data scientist Ian Oppermann told The Epoch Times that images impact people whether real or fake. One girl was reportedly made physically unwell as a consequence of the faked images circulated by the school students.
Mr. Oppermann underscored the importance of not using someone’s identity or features without consent, especially given the challenges of defining meaningful consent in the era of easily generated fake images.
“Part of the challenge is that we can now easily do things with AI that we had previously not considered,” he said.
“Generation of convincing deep fakes is a new phenomenon, [as is] the ability for anyone to generate deep fakes.”
If a faked image damages the reputation or standing of an individual, that individual is arguably more significantly harmed. The individual’s personal information has also arguably been misused, which would breach the Privacy Act.
While deepfake images can depict anything, Mr. Oppermann referred to media reports stating that 96 percent are used for pornography, with 90 percent of those depicting women.
The psychological damage and distress caused by such images is severe, amounting to a violation of a victim’s modesty, privacy, and personal rights.
Mr. Oppermann recommends treating faked pornography as seriously as real material, implementing stricter age restrictions and identity verification, and enhancing education on AI risks for youths.
He also calls for wider adoption of provenance metadata on digital products.
This metadata includes details about the content’s origin, creator, and construction process, enabling users to distinguish real material from synthetic products such as videos, images, or recordings.
He believes this measure would ultimately “return respect” to the digital sphere.