AI Deepfake ‘News Anchors’ Used in Pro-China Videos on Social Media: Report

A screen shows an artificial intelligence (AI) news anchor from a state-controlled news broadcaster, in Beijing, on Nov. 9, 2018. Nicolas Asfouri/AFP via Getty Images
Katabella Roberts

Artificial intelligence-generated deepfake news anchors are being used by Chinese state-aligned actors to promote pro-China propaganda videos on social media, according to a report published on Feb. 7.

The detailed report (pdf) by U.S.-based research firm Graphika marks the first time the firm has observed “state-aligned influence operation actors using video footage of AI-generated fictitious people in their operations.”

Graphika found that the fake news anchors were created for a likely fictitious news outlet called “Wolf News,” which the firm says used technology provided by London-based AI video company Synthesia.

According to Graphika, the videos were discovered while the company was tracking a network of pro-China disinformation operations that it dubbed “Spamouflage.”

“This set of two unique videos shared many of the same characteristics as traditional Spamouflage content: they ranged between one-and-a-half and three minutes in length, used a compilation of stock images and news footage from online sources, and were accompanied by robotic English-language voiceovers promoting the interests of the Chinese Communist Party,” Graphika wrote in its analysis.

One such video accused the U.S. government of attempting to tackle gun violence through “hypocritical repetition of empty rhetoric.” The other stressed the importance of cooperation between the United States and China for the recovery of the global economy.

Videos Were ‘Low Quality’

Graphika said it identified “Spamouflage” promoting the deepfakes on platforms including Twitter, Facebook, and YouTube, but the videos were low quality and “spammy,” and none of them had received more than 300 views.

China hasn’t commented on the report.

The website of Synthesia states that it’s an “AI video creation platform” used by thousands of companies to “create videos in 120 languages.” The company offers users more than 100 different “AI avatars,” including two named “Anna” and “Jason.”

“As a company pioneering this new kind of media,” Synthesia says it’s aware of the responsibility it has and that AI and similarly powerful technologies “cannot be built with ethics as an afterthought.”

“We will not offer our software for public use. All content will go through an explicit internal screening process before being released to our trusted clients,” the website states.

It also states that “political, sexual, personal, criminal and discriminatory content is not tolerated or approved.”

Victor Riparbelli, Synthesia’s co-founder and CEO, told The Japan Times that the users who created the avatars highlighted in the Graphika report had violated the company’s terms of service.
He said the accounts involved had since been suspended and that he takes “full responsibility for anything that happens” on the platform, but he declined to provide further details about the individual or individuals behind the Wolf News videos.

Beijing’s New Law

Riparbelli said that the company has a four-person team dedicated to preventing its deepfake technology from being used to create illicit content but noted that certain materials containing misinformation are hard to detect if they don’t include things such as outright hate speech or explicit words and imagery.

“It’s very difficult to ascertain that this is misinformation,” Riparbelli said after being shown one of the Wolf News videos, according to the publication. The CEO also urged policymakers to set clearer rules about how the AI tools could be used.

Riparbelli told The Epoch Times in an emailed statement: “We have strict guidelines for which type of content we allow to be created on our platform and we deeply condemn any misuse.

“The recent videos that emerged are in breach of our ToS [terms of service] and we have identified and banned the user in question.”

Riparbelli added that the company creates AI avatars only with “explicit consent” and is working with leading industry bodies to prevent misuse.

“No system will ever be perfect, but to avoid similar situations arising in future we will continue our work towards improving systems,” he said.

Graphika’s report comes shortly after Beijing adopted an expansive new law to regulate deepfakes called the “Provisions on the Administration of Deep Synthesis of Internet Information Services,” which took effect in January.

Under the regulations, deep synthesis providers must, among other things, establish and maintain systems for user registration and verification of user identity and provide reviews and ethical evaluations of the deepfake services and the algorithms used by the system.

The law also states that deep synthesis providers must implement procedures to notify and take down the publication of “false, illegal, or harmful information” by deep synthesis users.

Washington has repeatedly raised concerns that China’s AI advancements could undermine the United States’ economic competitiveness and national security.

Katabella Roberts is a news writer for The Epoch Times, focusing primarily on the United States, world, and business news.