YouTube announced Thursday that it will begin warning users before they post comments that others may find offensive.
As part of an effort to “keep comments respectful,” the company said the new feature will issue a warning to users of the video-sharing platform if its automated systems detect a potentially hurtful comment.
The firm said it will also begin to ask its users to provide demographic information in a bid to find patterns of hate speech “that may affect some communities more than others.”
“We’ll then look closely at how content from different communities is treated in our search and discovery and monetization systems,” YouTube said. “We’ll also be looking for possible patterns of hate, harassment, and discrimination that may affect some communities more than others.”
The feature is rolling out first on Android and will expand to other platforms over time, the company said.
YouTube said in December 2019 that it would soon adopt stricter policies on harassment, including “veiled or implied threats.” It said that since early last year, the number of daily hate speech comment removals has increased 46-fold.
“This is the most hate speech terminations in a single quarter and 3x more than the previous high from Q2 2019 when we updated our hate speech policy,” it added.
The platform’s latest feature is similar to one recently rolled out by Instagram, which uses “nudge warning” pop-ups to remind users to “keep comments respectful” before they post. Instagram has said the feature has been effective in reducing the number of offensive comments.
Facebook, meanwhile, announced Thursday that in the coming weeks it will begin removing claims about COVID-19 vaccines from its main platform and Instagram that have been debunked by public health experts and could lead to “imminent physical harm.”
Posts that run afoul of the policy could include claims about vaccine safety, efficacy, ingredients, or side effects, it said.