CCP Is Impersonating Americans to Influence 2024 Elections


Harlan Report appeared to be a startup news program like many others. Its bio on TikTok promised to make American media great again.

“No opinions, just facts,” it stated.

Like so many insurgent media profiles, the videos posted by Harlan seemed genuinely aimed at exposing government corruption and pushing back against an otherwise left-wing-dominated mediascape.

That much was clear when a video Harlan shared went viral, earning more than 1.5 million views. It claimed to show President Joe Biden making a sexual remark at the annual NATO Summit in Washington.

But something was off.

The transcript used in the video was wrong, and Biden never said what was claimed.

There were also other red flags.

The owner of the Harlan Report account originally claimed to be a U.S. military veteran who had lost faith in Biden. Soon after, the owner claimed to be a 29-year-old Trump supporter in New York. Months later, the owner claimed to be a 31-year-old Republican social media influencer from Florida.

Then the account’s handle was changed to “Harlan_RNC,” insinuating an official link to the Republican Party.

But Harlan was neither a legitimate news source nor run by an American citizen.

According to the findings of a report released last month by Graphika, a social network analysis company, Harlan Report was one of thousands of accounts linked to the world’s largest online influence operation.

That operation, dubbed “Spamouflage,” is a state-backed campaign from communist China with links to Chinese law enforcement.

Unlike Harlan Report, most of Spamouflage's efforts are not focused on targeting American conservatives but on amplifying existing criticisms of American society and government at large.

Other accounts create similar content tailored for Democrats, and still others aim to anger and polarize independents, further alienating them from the political process altogether.

Some have impersonated American anti-war activists, sharing memes calling former President Donald Trump a “fraud” and showing him in an orange prison uniform. Others question the legitimacy of Biden’s presidency.

What makes the Harlan Report persona unique is its success in finding a following and its pioneering role in targeting a niche audience the same way any advertiser would.

The TikTok app is displayed on a phone. Harlan Report is a user on TikTok. Drew Angerer/Getty Images

Now, security leaders are concerned that the Chinese Communist Party (CCP) will learn from its successes and continue to deploy Harlan-type social media profiles.

It is an issue that the Congressional Select Committee on Strategic Competition with the CCP is aware of and is now pressuring social media companies to take more seriously.

“It’s no surprise the CCP is now using fraudulent social media accounts to target our upcoming elections,” committee chair Rep. John Moolenaar (R-Mich.) said in a statement shared with The Epoch Times.

“We encourage social media companies to expose the CCP’s propaganda campaign and take action against CCP bots that are trying to deceive Americans.”

China’s Targeting Tactics

Foreign attempts to influence U.S. elections are nothing new, but their increasing stridency and varying levels of success are.

China, Iran, and Russia are all currently engaged in influence operations aiming to interfere in the 2024 elections, according to a report published in August by cybersecurity company Recorded Future.

That report found that Chinese state-backed actors are “amplifying content highlighting polarizing domestic issues”—including issues related to Black Lives Matter, school campus protests, and U.S. foreign policy toward Israel and Ukraine—to sow discord between Americans.

Iranian-backed actors have targeted Trump’s reelection campaign, attempting to gain access to its inner circle.

Russian-backed influence operations have attempted to discredit the Democratic presidential ticket by spreading fabricated stories and images about Vice President Kamala Harris.

The report found that Chinese influence operations, including Spamouflage, have historically failed to generate traction among American audiences but are now seeing sporadic breakthrough success with viral content.

Those breakthroughs, in large part, are due to the increasing use of artificial intelligence (AI) and deepfakes, which the operators behind Spamouflage use to play on the likes and dislikes of a target audience.

John Mills, who previously served as the director of cybersecurity policy at the Defense Department, told The Epoch Times that the CCP is using AI to sort and interpret user data to better exploit users’ fears and desires.

“People don’t understand the immense power of big data, big data analytics, and the AI component that China has mastered and is using on an unbelievable scale,” Mills said.

A specialist creates a demonstration video using artificial intelligence to make digital replicas of people who have died, on his laptop in Jiangyin, Jiangsu Province, China. Hector Retamal/AFP via Getty Images

“They [the CCP] are delivering a data stream tailored and customized to that individual, knowing their likes, their dislikes, their trigger points.”

An unclassified memo on election security published by the Office of the Director of National Intelligence (ODNI) in July found that the Chinese regime “is seeking to expand its ability to collect and monitor data on U.S. social media platforms, probably to better understand—and eventually manipulate—public opinion.”

Mills said such data would help the CCP dial in information about social media users’ positive and negative interactions. Those operations could then attempt to trigger mass distrust or hysteria over real or faked events.

“This is psychological operations 101: knowing your target audience, knowing their trigger points, and that’s what they’re doing with Spamouflage on a breathtaking, unbelievable scale and creating these fake accounts,” Mills said.

Last year, Meta, which first characterized Spamouflage as the world’s largest online influence operation, stated that China created 4,800 fake social media accounts posing as Americans.

In most of those cases, the accounts did not start by spreading fake content. Instead, they reshared posts created by real politicians and media outlets from both liberal and conservative sources to build followings and amplify divisive content.

As those followings grew, the profiles changed, both in who they claimed to be and in the type of content they delivered.

Mills said the technique used to identify and exploit Americans was essentially a new iteration of the same type of profiling that big tech corporations have used for years to track consumer preferences.

“When I’m looking for a trailer hitch [online], that commercial for a trailer hitch follows me wherever I go,” he said.

“Now, China has taken what our big tech was doing, but they’re doing it on a much grander scale, with a much more sinister agenda, and without any semblance of bumper cushions or guardrails.”

A pedestrian walks in front of the Meta logo at the Facebook headquarters in Menlo Park, Calif., on Oct. 28, 2021. Justin Sullivan/Getty Images

Conflicting Ideas About China’s Goals

The Department of Homeland Security’s 2025 Homeland Threat Assessment, published Oct. 2, anticipated that foreign use of “subversive tactics in an effort to stoke discord and undermine confidence in U.S. domestic institutions” would increase.

Recently, officials from the ODNI have delivered statements to the press to assert that Russian cyber actors are attempting to elect Trump and undermine Harris.

Yet the ODNI’s most recent election security fact sheet claims that China “probably does not plan to influence the outcome” of the U.S. election.

Mills said he believes that the CCP is “trying to influence the election” to ensure the election of a candidate who would be less effective at countering its quest for global hegemony.

“What is the Chinese agenda? I think, as opposed to the Russians, who just want to create hate and discontent writ large ... it is election interference,” he said.

One of the ODNI’s reports last year revealed that the CCP was more willing to interfere in U.S. elections now than in previous cycles precisely because it did not believe that the Biden administration would retaliate.

‘No Such Thing as Guardrails’

Just as interpretations of China’s motives have remained murky, so has official guidance: the government agencies responsible for defending Americans from such operations have largely failed to tell everyday Americans how to identify and respond to such content.

In April, Cybersecurity and Infrastructure Security Agency (CISA) senior adviser Cait Conley said the agency was ready to help stave off the threat of foreign influence operations, particularly in the 2024 election cycle.

“The elections process is the golden thread of American democracy, which is why our foreign adversaries deliberately target our elections infrastructure with their influence operations,” Conley said in a statement.

“CISA is committed to doing its part to ensure these officials—and the American public—don’t have to fight this battle alone.”

When asked what Americans can do to identify and counter foreign influence operations, CISA declined to comment and instead referred The Epoch Times to the ODNI.

The ODNI did not return multiple requests for comment on the matter.

When asked what actions the State Department was taking to address foreign influence in U.S. elections, a department spokesperson said that it was “focused on the information environment overseas.”

The Epoch Times has also requested comment from the Department of Homeland Security.

CISA subsequently released a public service announcement to highlight “efforts by foreign actors to spread disinformation in the lead-up to the 2024 U.S. general election with the goal of casting doubt on the integrity of the democratic process and sowing partisan discord.”

The notice, issued on Oct. 18, encourages Americans to verify anything they see online and to seek out “reliable sources” including state and local government sources.

(L–R) Moderator Jonathan Luff, chief of staff and chief of global affairs at Recorded Future, Lisa Einstein, chief AI officer of the Cybersecurity and Infrastructure Security Agency (CISA), Jennifer Bachus, principal deputy assistant secretary for the State Department's Bureau of Cyberspace and Digital Policy, and Michael Duffy, acting federal chief information security officer for the Office of Management and Budget participate in a discussion on “Strengthening National Security Through AI” during the Predict2024 Conference in Washington on Oct. 9, 2024. Kent Nishimura/Getty Images

Graphika, whose report does not offer any suggestions on identifying or countering the content examined, declined to comment.

Recorded Future also did not return a request for comment. In a report published in September, however, the company suggested that the response to deepfakes should be left to the entities concerned about reputational damage, whom it encouraged to cooperate with “fact-checkers, social media platforms, and media outlets.”

That’s a real problem, given the increasing reach of foreign influence campaigns, which, according to the Recorded Future report, frequently aim to mislead the public and engage in electioneering.

Similarly, according to research cited in the same report, most people cannot detect deepfakes and would benefit from guidance on the topic.

According to research published in the Journal of Cybersecurity Education, Research and Practice, most people are simply unable to identify deepfake videos of people with whom they are unfamiliar, and nearly 30 percent of people are unable to distinguish deepfakes of people they are familiar with.

Even if a person has identified a deepfake video for what it is, they may still be influenced by it, particularly if it promotes an extreme belief or action.

Research published in the academic journal Computers in Human Behavior found that “false information can have an effect on people’s political beliefs, even after retraction.”

“Even when people are aware that certain information may not be true, it still impacts their beliefs and actions,” the report reads.

“In other words, even implausible disinformation can influence political beliefs, partially beyond the awareness of recipients.”

The prevalence of deepfakes in foreign influence operations could, therefore, engender long-term dislike of or distrust in candidates among American voters, even after those voters learn the information was not real.

When asked what advice he would give Americans, Mills said: “You should be very, very suspect of anything you see online.

“There’s no such thing as guardrails with what China is up to.”
