Facebook announced that it deleted a wave of accounts, pages, and groups linked to the QAnon conspiracy theory.
“We’re constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps,” Facebook stated, adding that when it finds domestic, nongovernment campaigns that it judges to be “seeking to mislead people about who they are and what they are doing while relying on fake accounts,” it purges both inauthentic and authentic accounts, as well as pages and groups that are involved.
The move, part of the company’s April enforcement actions, targeted 20 Facebook accounts, six groups, and five pages related to QAnon.
While opinions vary as to its nature and intent, QAnon is a movement that started on the 4chan and 8chan message boards with a trickle of clandestine-sounding posts, often centered on the theme of big-government plots to curb individual liberties and advance so-called deep state and globalist agendas. It has since grown into a large underground movement with numerous splinter groups, whose adherents often claim that members of the world’s social, economic, and political elites have engaged in child sex trafficking and cannibalism.
Fake Engagement
Facebook said the objectionable pages, groups, and accounts frequently posted about news and topics that included the upcoming presidential election and its candidates, as well as the current U.S. administration.

Facebook stated: “Our investigation linked this activity to individuals associated with the QAnon network known to spread fringe conspiracy theories. We found this activity as part of our internal investigations into suspected coordinated inauthentic behavior ahead of the 2020 election in the US.”
“Coordinated inauthentic behavior is when groups of pages or people work together to mislead others about who they are or what they’re doing,” said Nathaniel Gleicher, Facebook’s head of security policy.
“When we take down one of these networks, it’s because of their deceptive behavior, it’s not because of the content they’re sharing,” he said.
“The posts themselves may not be false, and may not go against our community standards,” Gleicher added.
Targeting ‘Misinformation’
Facebook has been widely targeting articles, posts, and events during the pandemic, trying to position itself as a neutral arbiter of “misinformation.” In March, Facebook displayed warnings to users on 40 million posts, removing hundreds of thousands it deemed harmful.

The company said last week it would start notifying people who interacted with harmful claims about the pandemic from China, utilizing the World Health Organization, which has been linked to the Chinese Communist Party, as a source.
Twitter, Google, and other technology platforms have taken steps similar to Facebook’s, a trend that worries some experts.
“As a matter of public health, these moves are entirely prudent. But as a matter of free speech, the platforms’ unconstrained power to change the rules virtually overnight is deeply disconcerting,” Evelyn Douek, an affiliate at Harvard University’s Berkman Klein Center for Internet and Society, wrote in an article for The Atlantic.
“Unlike most countries’ emergency constitutions, those of major platforms have no checks or constraints. Are these emergency powers temporary? Will there be any oversight to ensure these powers are being exercised proportionately and evenhandedly? Are data being collected to assess the effectiveness of these measures or their cost to society, and will those data be available to independent researchers?”