Thousands of Sexual AI Apps Available on Smartphones: eSafety Commissioner

The eSafety Commissioner said the apps make abuse simple and cost-free for perpetrators while inflicting incalculable devastation on victims.
Various AI apps are seen on a smartphone screen in Oslo, Norway, on July 12, 2023. (Olivier Morin/AFP via Getty Images)
Alfred Bui

The proliferation of sexual AI (artificial intelligence) applications on smartphones has made it easier for perpetrators to commit offences, a parliamentary committee has been told.

At a recent inquiry hearing on a new sexual deepfakes bill, eSafety Commissioner Julie Inman Grant said many apps designed for nefarious purposes were currently available in app stores.

She gave examples of apps that openly promote their ability to use AI to modify pictures of girls.

“Shockingly, thousands of open-source AI apps like these have proliferated online and are often free and easy to use by anyone with a smartphone,” Ms. Inman Grant told the Legal and Constitutional Affairs Legislation Committee.

“So these apps make it simple and cost-free for the perpetrator, while the cost to the target is one of lingering and incalculable devastation.

“Some might wonder why apps like this are allowed to exist at all given their primary purpose is to sexualise, humiliate, demoralise, denigrate and create child sexual abuse material of girls.”

Concerns about Open-Source AI Apps

eSafety was concerned that open-source sexual AI apps were using sophisticated monetisation tactics and becoming more popular on mainstream social media platforms, especially among younger audiences.

Citing a recent study, Ms. Inman Grant said there was a 2,408 percent increase in referral links to non-consensual pornographic deepfake websites across Reddit and X (formerly Twitter) in 2023 alone.

“We’re also concerned about the impacts of multimodal forms of generative AI—for example, creating hyper-realistic synthetic child sexual abuse material via text prompt to video—as well as highly accurate voice cloning and manipulated chatbots that could supercharge grooming, sextortion, and other forms of sexual exploitation of young people at scale,” she said.

To tackle the risks of those apps, the Commissioner said her agency had submitted mandatory standards to the parliament to strengthen regulations on this issue.

At the same time, she believed tech companies should bear the burden of reducing the risks on their platforms.

“The onus will fall on AI companies to do more to reduce the risk that their platforms can be used to generate highly damaging content, such as synthetic child sexual abuse material and deepfaked image-based abuse involving under 18s,” Ms. Inman Grant said.

“These robust safety standards will also apply to the platform libraries that host and distribute these apps.

“These companies must have terms of service that are robustly enforced and clear reporting mechanisms to ensure that the apps they are hosting are not being weaponised to abuse, humiliate, and denigrate children.”

Law Enforcement Efforts Fall Behind

While authorities were taking action to mitigate AI risks, Ms. Inman Grant said the advancement of the technology had posed significant challenges for law enforcement.

“It’s worth noting that deepfake detection tools are lagging significantly behind the freely available tools being created to perpetuate deepfakes,” she said.

“And these [deepfakes] are becoming so realistic that they’re difficult to discern from the naked eye.”

The Commissioner added that AI-generated deepfakes were starting to overwhelm both investigators and support hotlines, as this material could be produced and shared faster than it could be reported, triaged, and analysed.

eSafety’s Informal Approach to Sexual Abuse Materials

Ms. Inman Grant told the committee that eSafety often took an informal approach when dealing with sexual abuse material, despite having formal enforcement powers at its disposal.

Under current laws, the online content regulator can approach online service providers informally to ask them to remove illegal or restricted content.

“We do have a 90 percent success rate in terms of getting image-based content down from almost exclusively overseas domiciled sites,” she said.

“We choose those [informal] pathways because they’re much quicker, and we know the more quickly we can get the harmful content down, the more relief we’re providing to the victim.”

Since the introduction of the Online Safety Act 2021, eSafety has issued 10 formal warnings, 13 remedial directions, and 34 removal notices to entities in Australia.

Alfred Bui is an Australian reporter based in Melbourne who focuses on local and business news. He is a former small business owner and holds two master’s degrees, in business and business law. Contact him at [email protected].