TikTok’s algorithms “actively suppress content critical of the Chinese Communist Party (CCP) while simultaneously boosting pro-China propaganda and promoting distracting, irrelevant content,” according to a study by researchers from Rutgers University and the school’s Network Contagion Research Institute (NCRI).
“Through the use of travel influencers, frontier lifestyle accounts, and other CCP-linked content creators, the platform systematically shouts down sensitive discussions about issues like ethnic genocide and human rights abuses,” it reads.
The report also found that TikTok had carried out “successful indoctrination” of its users, particularly heavy users, as evidenced by shifts in their attitudes toward China measured in a psychological survey.
“These users, through targeting or information environments engineered to sublimate free speech, appear to absorb these biased narratives unwittingly, leading to a distorted understanding of critical global issues,” the researchers wrote.
A TikTok spokesperson responded to the findings, telling The Epoch Times by email that the study was a “non-peer-reviewed, flawed experiment ... clearly engineered to reach a false, predetermined conclusion.”
Study
To conduct the study, researchers created 24 accounts across TikTok, Instagram, and YouTube, mimicking 16-year-old users in the United States. The accounts were used to test the three platforms’ algorithms with four search keywords often mentioned alongside the CCP’s human rights abuses: “Uyghur,” “Xinjiang,” “Tibet,” and “Tiananmen.”

Researchers collected more than 3,400 videos from their search results for the four keywords and classified each video as “pro-China,” “anti-China,” “neutral,” or “irrelevant.”
For Xinjiang, a “pro-China” video could show minorities’ folk customs or idyllic portrayals of rural life, while an “anti-China” video could show the Uyghurs’ plight in China or calls for boycotting products made in Xinjiang, according to the report.
Only 2.3 percent of search results for “Xinjiang” on TikTok were considered “anti-China,” in comparison to 21.7 percent on YouTube and 17.3 percent on Instagram, according to the report.
More than 26 percent of search results for “Tiananmen” on TikTok were considered “pro-China,” compared with only 7.7 percent of search results on YouTube and 16.3 percent on Instagram, according to the report.
According to the report, a “pro-China” video on Tiananmen could be “denials of the massacre and revisionist historical takes” or “scenic pictures of the square that bear no mention of the massacre.”
For search results using the word “Tibet,” TikTok had the smallest share of anti-China content (5 percent) and the largest share of pro-China content (30.1 percent) across the three platforms, according to the report.
The report explained that a “pro-China” video on Tibet could “echo the CCP narratives that Tibet has been liberated,” while an “anti-China” video could show footage of Tibetan protests or “details of Tibetan cultural erasure by the CCP.”
Survey
Researchers also surveyed 1,214 American TikTok users, seeking to understand their perception of China based on the time they spent on the app.

Heavy users of TikTok (those using the app for more than three hours per day) showed a 49 percent increase in positivity toward the CCP’s human rights record relative to nonusers. For those using the app for between 15 minutes and three hours per day, the increase was 36 percent.
“By contrast, use of YouTube and Instagram showed no significant relationship on users’ perception of China’s human rights record,” the report reads.
“This suggests that TikTok’s content may contribute to psychological manipulation of users, aligning with the CCP’s strategic objective of shaping favorable perceptions among young audiences.”
Heavy TikTok users also showed a 48 percent increase in the perception that “Tiananmen Square is mostly known as a tourist site.”
Researchers recommended the creation of a Civic Trust funded by social media platforms and the public to help identify platforms that are manipulating user perceptions.
“If social media algorithms are found to be subverting the very democracies that provide them the freedom to operate, they are both unjust and dangerous,” the report reads. “There must be accountability and corrective measures to ensure that platforms are not exploited by state actors to erode democratic institutions and values.”