Over 70 Percent of YouTube Videos Viewers Deemed Objectionable Were Recommended by YouTube’s Own Algorithm: Study

A woman with a smartphone walks past a billboard advertisement for YouTube in Berlin on Sept. 27, 2019. Sean Gallup/Getty Images
Tom Ozimek

A new study by the software nonprofit Mozilla Foundation found that 71 percent of videos that participants deemed objectionable were suggested to them by YouTube’s own recommendation algorithm.

“Research volunteers encountered a range of regrettable videos, reporting everything from COVID fear-mongering to political misinformation to wildly inappropriate ‘children’s’ cartoons,” Mozilla Foundation wrote in a statement.

The largest-ever crowdsourced probe into YouTube’s controversial recommendation algorithm found that the automated software continues to recommend videos that viewers consider “disturbing and hateful,” Mozilla said, including ones that violate YouTube’s own content policies.

The study involved nearly 38,000 YouTube users across 91 countries who volunteered data to Mozilla about the “regrettable experiences” they had on the world’s most popular video content platform. Overall, participants flagged 3,362 regrettable videos between July 2020 and May 2021, with the most frequent “regret” categories being misinformation, violent or graphic content, hate speech, and spam/scams.

Mozilla said that almost 200 videos that YouTube’s algorithm recommended to volunteers have since been removed from the platform, including several that YouTube determined violated its own policies.

“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” said Brandi Geurkink, Mozilla’s senior manager of advocacy, in a statement. “Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies.

“Mozilla hopes that these findings—which are just the tip of the iceberg—will convince the public and lawmakers of the urgent need for better transparency into YouTube’s AI.”

A YouTube spokesperson told The Epoch Times that the company has taken steps to reduce recommendations of content it considers harmful to less than 1 percent of videos viewed on the platform. The company also said it welcomes more research on this front and is exploring options for bringing in external researchers to study its systems.

Mozilla’s report provides fresh insight into YouTube’s secretive recommendation algorithm, which the company itself acknowledged in a 2019 blog post was in need of tweaks. YouTube said that, since January 2019, it had “launched over 30 different changes to reduce recommendations of borderline content and harmful misinformation,” with the company claiming that its actions have led to an average 70 percent drop in watch time for this kind of content.

“That said, there will always be content on YouTube that brushes up against our policies, but doesn’t quite cross the line,” YouTube said.

Mozilla’s report also found that the rate of “regrettable” videos was more than 60 percent higher in non-English-speaking countries, most notably in Brazil, Germany, and France.

This article has been updated to reflect receipt of a statement from a YouTube spokesperson.
Tom Ozimek is a senior reporter for The Epoch Times. He has a broad background in journalism, deposit insurance, marketing and communications, and adult education.