Meta Ends Fact-Checking Program on Facebook: What to Know

The move ends a years-long trend toward more moderation.
Facebook CEO Mark Zuckerberg speaks at the Paley Center in New York City on Oct. 25, 2019. Mark Lennihan/AP Photo
Zachary Stieber
Meta has ended its fact-checking program, among other major changes.
Here’s what to know.

‘Too Politically Biased’

Meta, then known as Facebook, introduced the fact-checking program in 2016 after Donald Trump won that year’s presidential election.
After the election, CEO Mark Zuckerberg said he would work with “respected fact-checking organizations” to reduce misinformation on Facebook. Soon after, the fact-checking program was formally unveiled, with the company expressing the belief that “providing more context can help people decide for themselves what to trust and what to share.”

The organizations in the program, including Snopes, analyzed posts. If they determined that a post contained false information, the post was either flagged with an accompanying fact-check note or removed.

“After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy,” Zuckerberg said in a video on Jan. 7. “We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.”
Joel Kaplan, Meta’s chief global affairs officer, said in a statement that the program’s intention was to have independent experts give people more information about viral posts so that they could judge for themselves what they read.

“That’s not the way things played out, especially in the United States,” he wrote. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how.

“Over time, we ended up with too much content being fact-checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.”

Shift Toward X Model

Meta has copied some features of the social media platform X, formerly known as Twitter, in recent years. Meta’s Threads, first introduced as a messaging app for Instagram, was later retooled to serve as an X-style platform for short text posts. Meta also rolled out a paid verification service, Meta Verified, following Elon Musk’s rollout of Twitter Blue.

Meta’s latest move replaces the fact-checking program with an X-style community notes feature, with notes written by users.

“We’re going to get rid of fact-checkers and replace them with community notes, similar to X,” Zuckerberg said.

Kaplan said X’s approach has worked.

“They empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” he said. “We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing—and one that’s less prone to bias.”

Musk applauded the development. “This is awesome,” he wrote on X.

Applying for Notes

Community Notes on X relies on contributors, who write assessments of posts they believe contain misleading information.
To be attached to a post, a proposed note must be agreed upon by contributors “who have sometimes disagreed in their past ratings,” according to X. “This helps prevent one-sided ratings.”

Even some of Musk’s posts have received Community Notes.

X explicitly states that it doesn’t rely on professionals, but rather views regular people as valuable contributors.

Zuckerberg said in 2016, before the fact-checking program was introduced, that Facebook had historically relied on users to help the company identify what was fake, but that the problem had become so complex that working with fact-checking groups was necessary.

Executives are now returning to that user-driven approach, with the new notes written by contributing users. Meta will not control which notes show up, according to Kaplan; only notes that have agreement “between people with a range of perspectives” will appear publicly.

Praise Comes—With Some Opposition

Some panned the development.
“Meta’s decision to end fact-checking will supercharge the misinformation that runs rampant on our digital platforms, further distorting our reality and undermining our democracy,” Sen. Michael Bennet (D-Colo.) said.
Angie Drobnic Holan, director of the International Fact-Checking Network, whose members worked on Meta’s program, said in a statement posted on X that the change “will hurt social media users who are looking for accurate, reliable information about their everyday lives and interactions with friends and families.”

On the other hand, Musk was not the only person to praise the changes.

“The First Amendment protects social media companies’ editorial choices about the content on their platforms,” Ari Cohn, lead counsel for tech policy for the free speech group FIRE, told The Epoch Times in an emailed statement. “But it’s a good thing if platforms voluntarily try to reduce bias and arbitrariness when deciding what content to host—especially when they promise users a culture of free speech like Meta does.”

“A great day for freedom of speech!” Rep. Randy Weber (R-Texas) wrote on X. “It seems like Meta is finally taking a page from Elon Musk’s playbook and letting Americans make decisions for themselves.”
Zachary Stieber
Senior Reporter
Zachary Stieber is a senior reporter for The Epoch Times based in Maryland. He covers U.S. and world news. Contact Zachary at [email protected]