Meta Developing AI Bots to ‘Fact-Check’ Wikipedia Entries

Facebook CEO Mark Zuckerberg makes the keynote speech at F8, Facebook's developer conference, in San Jose, Calif., on May 1, 2018. AP Photo/Marcio Jose Sanchez
Naveen Athrappully

Meta, the parent company of Facebook, has developed the world’s first artificial intelligence (AI) model that’s capable of automatically verifying hundreds of thousands of citations at once, a feature the firm says could help make entries on Wikipedia “more accurate.”

Researchers designed the AI so that it can find appropriate sources for a claim from among millions of web pages, according to a Meta blog post on July 11. The algorithms were trained on 4 million claims from Wikipedia, teaching them to zero in on a single source from the vast pool of web pages to validate each statement.

The model ranks the cited source and also lists potential alternative sources that might support the claim. A human editor can then review the AI-supplied citation for approval. According to Meta, the AI model will “bolster” the quality of knowledge on Wikipedia.

Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends that the company currently has only a proof of concept and that the tech isn’t “really usable” yet. In addition, Meta clarified in a note that it hasn’t partnered with Wikipedia on the project.

That the AI is being developed by Meta, which is widely known to censor conservative content while promoting anti-conservative content on its platform, has raised eyebrows.

Last year, for example, an employee at a firm hired by Facebook for content moderation revealed that the social platform provided guidance that was specifically focused against white people and conservatives.

For instance, Facebook insisted on making a “newsworthy exception” when a CNN anchor labeled white men as terror threats. Similarly, the guidance said it was OK to attack white men who don’t support LGBT causes.

Members of a COVID-19 vaccine injury group are now being forced to use codewords and avoid terms such as “vaccine,” “injury,” and “Pfizer” to avoid being flagged or deleted when communicating on Facebook.

Wikipedia’s Leftist Bias

There also is concern that the idea of an AI “fact-checking” Wikipedia might strengthen the platform’s leftist bias.

Last year, Wikipedia co-founder Larry Sanger acknowledged that Wikipedia has slipped into “leftist propaganda,” moving away from its neutral stance. Sanger left Wikipedia 20 years ago because he wasn’t happy with the direction of the platform.

While speaking to EpochTV’s “American Thought Leaders,” Sanger pointed out that over the previous five years, any individual who is contrarian or seen as conservative has often found that the Wikipedia article about them “grossly misrepresents their achievements, often just leaves out important bits of their work, and misrepresents their motives.”

In April, it was found that editors at Wikipedia deleted an entry on President Joe Biden’s son Hunter Biden’s investment and advisory firm Rosemont Seneca Partners. The business, founded in 2009, came under congressional scrutiny due to the younger Biden’s overseas dealings.

A 2018 analysis of Wikipedia found that nine of the top 10 outlets cited on the platform’s articles were those with a left-wing bias, including The New York Times, BBC News, and The Guardian.

Naveen Athrappully
Author
Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.