The Australian eSafety Commissioner (eSafety) has commenced civil penalty proceedings against social media giant X, formerly known as Twitter, for failing to comply with government requirements regarding child sexual exploitation materials.
eSafety alleged that X failed to comply with the transparency notice by not preparing a report in the required manner and form. In addition, it said X either did not respond to some questions in the notice or failed to answer them truthfully and accurately.
For example, the company did not explain how much time it needed to respond to reports of child sexual exploitation, the measures it had implemented to detect child sexual exploitation in live streams, and the tools and technologies it used to detect those abusive materials.
X also inadequately disclosed the number of safety and public policy staff remaining after tech billionaire Elon Musk acquired the platform in October 2022 and implemented several rounds of job cuts.
While other social media platforms, including Google, also responded poorly to the transparency notice, eSafety considered X's non-compliance to be the most serious.
In September, eSafety issued an AU$610,500 (US$414,400) fine to X and gave the company 28 days to request the withdrawal of the infringement notice or pay the penalty.
X neither paid the fine nor requested a withdrawal, opting instead to seek judicial review of eSafety's transparency and infringement notices.
Other Social Media Platforms Not Doing Enough
While X is the only tech company to have been fined, eSafety found that other social media platforms were also falling short in tackling child sexual exploitation material in Australia. The regulator said:

“This latest report also reveals similar gaps in how these five tech companies are dealing with the problem and how they are tackling the rise in sexual extortion, and we need them all to do better.
“What we are talking about here are serious crimes playing out on these platforms committed by predatory adults against innocent children, and the community expects every tech company to be taking meaningful action.”
Among the platforms, Discord took no measures to detect child sexual exploitation in live streams, describing such measures as “prohibitively expensive.”
The company also did not use any language analysis technology to detect child sexual abuse activities, such as sexual extortion, across its services.
Google used such technology only on YouTube, not on Chat, Gmail, Meet, or Messages.
Furthermore, in some or all of their services, Google and Discord neither blocked links to known child sexual exploitation material nor used technology to detect grooming.