An advocate says Big Tech should take responsibility for curbing abusive content on social media, as the companies “make a heap of money” out of their platforms.
Anti-trolling campaigner and journalist Erin Molan on Tuesday said people can “get absolutely annihilated and torn to shreds” by anonymous trolls, yet seeking help from the social media platforms themselves or law enforcement is “almost impossible.”
“They make a heap of money out of these platforms, an enormous amount of money… with that comes responsibility.”
She recounted some of the “horrific” abuse that made her fear for her life and her young daughter’s safety, adding that the social media giants’ responses failed her.
“[On] Facebook, I recorded some horrific messages from an account, and the account kept being recreated. I would block it, it would be recreated… That was about trying to kill my child within my stomach.”
“You feel like you’re banging your head against a brick wall as you look at their business model. Advertising is the biggest thing for them... They’d love one person to have 8000 accounts because it gives them more people to sell to advertisers.”
Criminologist Michael Salter told the committee that Molan’s experience of reporting abuse to social media companies was common among victims.
“We’re asking for transparency because far too often what we’re provided from social media company reports on these issues ... is statistics that are most friendly to them,” he said.
“Having basic safety expectations built into platforms from the get-go is not too much to expect from an online service provider.”
Meanwhile, child safety advocate Sonya Ryan said many social media platforms were unwilling to cooperate with law enforcement because there is “more focus on privacy than there is on the protection and safety of young people.”
In its submission, Twitter said it recognised the need to “balance tackling harm with protecting a free and secure open internet.” But it also warned that any hasty policy decisions or rushed legal regimes would lead to consequences that “stretch far beyond today’s headlines, and are bigger than any single company.”
For its part, Meta (Facebook) said it had cut the prevalence of “hate speech” content by more than half over the past year and was proactively detecting more than 99 percent of content considered “seriously harmful.”
TikTok noted that between April and June 2021, more than 81 million videos were taken off the platform for violating its guidelines.
Of those videos, TikTok said it identified and removed 93 percent within 24 hours of posting, 94.1 percent before a user reported them, and 87.5 percent with zero views.