eSafety Commissioner Julie Inman Grant has revealed that sexual deepfakes have increased more than fivefold each year, on average, since 2019.
Deepfakes are fabricated videos and pictures in which a person’s face or body has been digitally altered by software or artificial intelligence (AI) to make them appear to be somebody else.
“There’s some compelling and concerning data that explicit deepfakes have increased on the internet by as much as 550 percent year on year since 2019,” Ms. Inman Grant told an inquiry hearing on July 23.
“Pornographic videos make up 98 percent of the deepfake material currently online, and 99 percent of that imagery is of women and girls.”
The Commissioner also noted that deepfake image-based abuse was becoming more prevalent and distressing to victim-survivors.
At the same time, Ms. Inman Grant explained the challenges faced by law enforcement when dealing with sexual deepfakes.
According to the Commissioner, deepfakes generated by AI were starting to overwhelm both investigators and support hotlines as this material could be produced and shared faster than it could be reported, triaged, and analysed.
Under the proposed legislation, people who share non-consensual sexual deepfakes could face up to six years in prison.
In addition, those who both create and share this type of material could face a maximum of seven years.
However, it is worth noting that the bill will not penalise the creation of sexual deepfakes if the content is not shared.
eSafety Supports Sexual Deepfake Legislation
At the hearing, Ms. Inman Grant expressed her support for the legislation, saying it would strengthen her agency’s efforts to tackle abuse material on the internet.
“Criminalisation of these actions is entirely appropriate, serving as an important deterrent function while expressing our collective moral repugnance to this kind of conduct,” she said.
“I believe that the bill adds powerfully to the existing interlocking civil powers and proactive safety interventions championed by eSafety,” she added.
Under the current Online Safety Act, eSafety has the power to require tech companies to take down abusive materials.
With the advent of AI technology, the online content regulator has made efforts to ensure that synthetic materials and deepfakes are covered through all four of its complaint schemes.
“That includes our online content scheme that deals with child sexual abuse and pro-terror content, image-based abuse, adult cyber abuse, and youth-based cyberbullying,” Ms. Inman Grant said.
“And we’ve now received deepfake reports into every scheme with the exception of the adult cyber abuse scheme.”
When asked whether the criminalisation of deepfake offences would help authorities go after companies that actively promoted sexual deepfake apps, the Commissioner said she was unsure.
“I’m not sure how the mechanics of the criminalisation will work. Will you throw executives who are based in Australia in jail, or would it be in terms of fines and penalties?” she asked.
“I tend to think that criminal provisions will be more effective as a deterrent and in punishing perpetrators.”