Facebook has removed thousands of groups, pages, and ads tied to Antifa and other violence-inciting militia organizations from its platform.
Facebook said those pages, groups, and Instagram accounts are tied to offline anarchist groups that support violent acts amid protests.
The company didn’t elaborate on whether it was referring to the current wave of violent protests and riots following the death of George Floyd.
The social media platform said it had expanded the scope of its policy enforcement to cover behavior such as celebrating violent acts.
“We have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior,” read the statement. “So today we are expanding our Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be designated as a dangerous organization and banned from having any presence on our platform.”
Besides the groups and pages related to Antifa and violence-inciting militia groups, Facebook also removed over 790 groups, 100 pages, and 1,500 ads tied to QAnon.
While opinions vary as to its nature and intent, QAnon is a movement that started on the 4chan and 8chan message boards with a trickle of clandestine-sounding posts, often centered on the theme of big-government plots to curb individual liberties and advance so-called deep state and globalist agendas. It grew into a large underground movement with a number of splinter groups, and its adherents sometimes claim that members of the world’s social, economic, and political elites have engaged in child sex trafficking, abuse, and cannibalism.
Facebook is not the only social media platform to take systematic restrictive action against QAnon.
In July 2020, Twitter banned more than 7,000 QAnon-related accounts and limited the reach of around 150,000 others, part of a crackdown on what the company says is behavior that could lead to “offline harm.”
“We’ve been clear that we will take strong enforcement action on behavior that has the potential to lead to offline harm,” Twitter wrote in a July 21 tweet, characterizing its actions as “work at scale to protect the public conversation in the face of evolving threats.”