Facebook’s ‘Supreme Court’ Overrules It, Orders Banned Posts Restored

A Facebook logo is displayed on a smartphone in this illustration taken on Jan. 6, 2020. Dado Ruvic/Reuters
Matthew Vadum

Facebook’s new “Supreme Court” ruled against Facebook four out of five times in its first batch of content-moderation rulings released to the public.

The company’s recently staffed Oversight Board has an independent funding stream, and Facebook can’t fire its members. Facebook has vowed to obey the board’s rulings unless they are contrary to law.

Facebook currently has 2.7 billion users worldwide and 15,000 content moderators, according to Ars Technica.
The rulings, which deal with takedown decisions by content moderators who reportedly work in “sweatshop” conditions, offer some insight into Facebook’s byzantine, seemingly arbitrary content-moderation process, in which the company deletes posts with only vague justification for the censorship or no explanation at all.
Conservatives say Facebook engages in viewpoint discrimination and frequently censors posts on topics of interest to them such as election fraud and the legitimacy of the 2020 presidential election, the CCP virus that causes the disease COVID-19, Black Lives Matter, and the transgender movement. In July 2019, Facebook banned The Epoch Times from advertising, without giving clear reasons for doing so.
In 2018, Facebook removed a post containing the Declaration of Independence on the grounds that America’s founding document was hate speech; it reversed that decision soon afterward following a public outcry.

Before that, Facebook took down the iconic 1972 “Napalm Girl” photograph showing children running from a bombed-out village during the Vietnam War, citing its policy against child nudity.

The company reversed that decision, too.

“After hearing from our community, we looked again at how our Community Standards were applied in this case,” Facebook said, acknowledging “the history and global importance of this image in documenting a particular moment in time.”

In September 2019, Facebook CEO Mark Zuckerberg unveiled his plan to create the board.

“If someone disagrees with a decision we’ve made, they can appeal to us first, and soon they will be able to further appeal this to the independent board,” he wrote in a letter at the time. “As an independent organization, we hope it gives people confidence that their views will be heard and that Facebook doesn’t have the ultimate power over their expression.”

Board members serve a maximum of three three-year terms each and are paid an amount that the company hasn’t disclosed. The company is allowed to seek an “automatic and expedited review” in exceptional circumstances, “when content could result in urgent real-world consequences,” such as, for example, if someone were live-streaming a murder.

In the only Oversight Board ruling dealing with a user from the United States, Case Decision 2020-005-FB-UA, that body reversed Facebook’s decision to take down a post that the company had claimed violated its Community Standard on Dangerous Individuals and Organizations.

In October 2020, a user had posted a quotation that was incorrectly attributed to Joseph Goebbels, the Reich minister of propaganda in Nazi Germany. It wasn’t accompanied by Nazi symbols or even a photograph of Goebbels, who killed himself in 1945 as the Soviet Army was closing in on his location in Berlin. The quotation, in English, claimed that, instead of appealing to intellectuals, arguments should appeal to emotions and instincts, and “that truth does not matter and is subordinate to tactics and psychology.”

The user, who wasn’t identified in the case decision, said his or her intent was to draw a comparison between the sentiment in the quotation and the presidency of Donald Trump. The board held that the quotation “did not support the Nazi party’s ideology or the regime’s acts of hate and violence,” adding that comments on the post from the user’s friends supported the user’s claim that he or she sought to compare the Trump presidency to the Nazi regime.

Facebook claimed that posts sharing a quotation attributed to a dangerous individual are treated as expressing support for them, unless the user provides additional context to make their intent explicit. The company said it removed the post because the user didn’t make clear that they shared the quotation to condemn the purported author of it, to counter extremism or hate speech, or for academic or news purposes.

The Oversight Board determined that “these rules were not made sufficiently clear to users.”

The board didn’t rely on the First Amendment to the U.S. Constitution, which protects free speech in the United States, instead referencing documents adopted by international bodies, such as the U.N. Guiding Principles on Business and Human Rights, the International Covenant on Civil and Political Rights, and the International Convention on the Elimination of All Forms of Racial Discrimination.

In Case Decision 2020-002-FB-UA, the board overturned Facebook’s decision to take down an October 2020 post from a user in Myanmar who wrote that “Muslims have something wrong in their mindset,” arguing that “Muslims should be more concerned about the genocide of Uighurs in China and less focused on hot-button issues like French cartoons mocking the Prophet Muhammad.”

The post featured two widely shared photographs of a Syrian toddler of Kurdish ethnicity who drowned attempting to reach Europe in 2015.

Facebook removed the post as anti-Muslim hate speech, but the board allowed it, determining that it should be viewed as “commentary on the apparent inconsistency between Muslims’ reactions to events in France and in China.” The board found that while the post could be deemed offensive, “it did not reach the level of hate speech.”

In Case Decision 2020-003-FB-UA, the only board ruling in favor of Facebook so far, the body held that a November 2020 post claiming Azerbaijanis “are nomads and have no history compared to Armenians,” and urging “an end to Azerbaijani aggression and vandalism,” was properly banned as hate speech.

The post was made during an armed conflict last year between Armenia and Azerbaijan. The user posted historical photographs described as showing churches in Baku, Azerbaijan. The Russian wording accompanying the photos claimed that Armenians built Baku and that this heritage, including the churches, was destroyed.