Chinese Military Aims to Use AI to Boost Disinformation Campaigns: Think Tank

If the CCP began working on the technologies in 2023, it would be ready to target the upcoming U.S. presidential election, researchers say.
Facebook, Instagram, and Twitter apps are seen on the screen of an iPhone. Photo Illustration by Justin Sullivan/Getty Images
Catherine Yang

A new report from RAND found that the Chinese military has embraced the use of artificial intelligence (AI) to further foreign influence campaigns.

The United States and other countries “should prepare for this AI-driven social media manipulation” by adopting risk reduction measures, promoting media literacy and government trustworthiness, increasing public reporting, and increasing diplomatic coordination, according to the study, published Oct. 1.

It’s the first report focused on the planning and strategies behind the Chinese Communist Party’s (CCP) social media influence campaigns.

Researchers investigated Li Bicheng, a Chinese military-affiliated researcher and leading expert on mass social media manipulation who is believed to be at least partly responsible for the regime’s adoption of these technologies. The study drew evidence from more than 220 Chinese-language academic journal articles and more than 20 English-language international conference papers written by Li.

Among the key findings are that the CCP began developing social media manipulation capabilities in the 2010s, that it is “clearly interested in leveraging AI” for these campaigns, that Chinese military researchers are conducting cutting-edge work in the field, and that the CCP is well-positioned to run large-scale manipulation campaigns.

The researchers note that the CCP’s AI activities run counter to the regime’s public statements, in which officials have overtly stated their opposition to using AI for disinformation.

Adopting Social Media

According to the researchers, the CCP’s initial response to social media-fueled uprisings, such as the Arab Spring, was to crack down on the technology.

But it also took an interest in what it saw as Western uses of “online psychological warfare,” and in 2013 released planning documents for the CCP to “strengthen international communications capabilities and construct foreign discourse power.”

CCP leader Xi Jinping gave a speech that year calling for “launching a public opinion struggle” and aiming to “build a strong cyber army.”

He said China was a victim of accusations from the West and that retaliation was justified.

“We must meticulously and properly conduct external propaganda” and innovate, Xi said.

The next few years saw the creation of new departments focused on online propaganda across several CCP bodies, as well as more cross-department collaboration in foreign influence campaigns.

By 2017–2018, large-scale, state-sponsored efforts were actively targeting foreign groups. Researchers believe the disinformation campaign “Spamouflage,” which was identified as a CCP-led initiative in 2023, began during this period. In 2018, Taiwan accused the Chinese military of using social media manipulation to interfere in its elections, the first such public accusation.

Social media disinformation campaigns “exploded” by 2019, according to the report, and were used extensively during the Hong Kong protests, then the COVID-19 pandemic, and later the 2022 U.S. midterm elections.

Testing Ground for AI

In 2023, Li Bicheng wrote in a study that the regime’s artificial social media posting was still inefficient and that it still needed to rely on human labor in these efforts. He had been researching ways to use AI to replace “50 cent army” social media users, believing this would help the CCP achieve “the public opinion advantage in future informationized wars.”

Li laid out a six-step process: discover and acquire key information; prepare and select appropriate media carriers; produce tailored content for each of the targeted online platforms; select appropriate timing, delivery mode, and steps; strengthen dissemination across multiple sources by forming “hot spots”; and further shape the environment and expand influence.

His research in recent years has been a broad effort to pull together various technologies that could automate these steps, according to the researchers, creating intelligent posts that can be automatically deployed on social media in the most effective way, personalized to target audiences.

CCP-sponsored researchers are also developing a simulated environment, or “supernetwork,” to test these AI capabilities—to see whether the AI-generated content has the desired effect of swaying public opinion in the intended direction.

“The simulated environment leverages existing real-world data, cognitive science, and network modeling,” the report reads.

“We argue that current LLM [large language model] technology is sufficient to conduct Li’s proposed automated public opinion guidance system; China likely has the capacity to operationalize the system.”

If the CCP has already begun working on the technologies discussed in the 2023 studies, it would be ready and able to target the upcoming U.S. presidential election, researchers say.

Some evidence of CCP-linked, AI-generated disinformation campaigns had already surfaced in 2023, according to the report, such as AI-generated images of the Hawaii wildfires and an influence campaign on YouTube that pushed pro-China and anti-U.S. narratives on topics including a “U.S.–China tech war” and geopolitics.

The Australian Strategic Policy Institute, a think tank, concluded that the campaign, which had amassed some 120 million views on YouTube by December 2023, was “one of the most successful influence operations related to China ever witnessed on social media.”

The RAND researchers expect social media bots to make increasing use of generative AI going forward and recommended that U.S. officials and social media platforms reduce risk by investing in ways to detect, and require labeling of, AI-generated content.

The findings of the study do not capture the full scale of the CCP’s digital influence operations, researchers noted.

“It is important to note that social media manipulation is only one of many tools within the CCP’s broader foreign influence operations toolkit and that the Chinese military is only one of likely many actors conducting these activities on behalf of the Party-state,” the report reads.