Seven of the world’s largest tech firms have been told by the Australian government to do better at tackling online child sexual exploitation after a report by eSafety Commissioner Julie Inman Grant found their responses inadequate.
“We’re talking about illegal content that depicts the sexual abuse of children—and it is unacceptable that tech giants with long-term knowledge of extensive child sexual exploitation, access to existing technical tools, and significant resources are not doing everything they can to stamp this out on their platforms,” Inman Grant said.
“We don’t need platitudes; we need to see more meaningful action.”
The companies were given 28 days to respond to the notices or risk fines of up to $550,000 a day.
At the time, the regulator said the country had seen a surge in reports of child sexual exploitation since the start of the pandemic, adding that “technology was weaponised to abuse children.”
“The harm experienced by survivors is perpetuated when platforms and services fail to detect and remove the content,” the regulator said.
Apple, Microsoft Highlighted by Commissioner
The regulator found that two of the world’s largest tech firms, Apple and Microsoft, do not attempt to proactively detect child abuse material stored on their iCloud and OneDrive services.

This is despite the wide availability of PhotoDNA detection technology, which was originally developed by Microsoft and is now used by tech companies around the world to scan for known child sexual abuse images and videos, with a false positive rate of one in 50 billion, the commissioner said.
Apple and Microsoft also admitted that they do not use any technology to detect live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams, or FaceTime—despite Skype being a commonly used platform.
However, Microsoft received praise from the commissioner for offering in-service reporting of child sexual exploitation.
“There is no in-service reporting on Apple or Omegle, with users required to hunt for an email address on their websites—with no guarantees they will be responded to,” Inman Grant said.
“Fundamental to safety by design and the Basic Online Safety Expectations are easily discoverable ways to report abuse. If it isn’t being detected and it cannot be reported, then we can never really understand the true scale of the problem.”
The regulator also unearthed large differences in how quickly the tech companies respond to reports of child sexual exploitation and abuse on their platforms, with times ranging from an average of four minutes for Snapchat to two days for Microsoft.
“Speed isn’t everything, but every minute counts when a child is at risk,” she said.
Grooming was also spotlighted, with Microsoft, Skype, Snap, and Apple admitting to the regulator that they do not use any tools to help detect it on their platforms, including Outlook.com, Teams, OneDrive, Skype messaging, Snapchat’s direct chat and snaps, and Apple’s iMessage.
Meta, WhatsApp Struggling to Stop Repeat Offenders
The report also noted that firms such as Meta and WhatsApp struggle with repeat offenders, with Meta noting in its response that if an account is banned on Facebook, the ban does not always flow through to Instagram. Likewise, when a user is banned on WhatsApp, that information is not then passed to Facebook or Instagram.

“This is a significant problem because WhatsApp report they ban 300,000 accounts for child sexual exploitation and abuse material each month—that’s 3.6 million accounts every year,” Inman Grant said.
“What’s stopping all those offenders creating new accounts on Facebook or Instagram and continuing to abuse children?”