China, Iran, Russia Using AI to Influence US Elections, Intelligence Community Warns

Foreign adversaries are escalating their efforts as the 2024 election approaches, according to a report from the Office of the Director of National Intelligence.
In this photo illustration, social media applications are seen on a phone in New York City on March 13, 2024. Michael M. Santiago/Getty Images
Tom Ozimek
Foreign powers are increasingly using artificial intelligence (AI) to influence how Americans vote, according to a new intelligence report released just 45 days ahead of the 2024 presidential election.

The Office of the Director of National Intelligence (ODNI) released a security update on Sept. 23 warning that China, Iran, and Russia are ramping up their respective influence efforts to shape public opinion in the United States using AI tools. These efforts are consistent with a previous warning issued by the Intelligence Community earlier in September, as foreign actors seek to exacerbate societal divisions and sway voters.

“These actors most likely judge that amplifying controversial issues and divisive rhetoric can serve their interests by making the United States and its democratic system appear weak and by keeping the U.S. government distracted with internal issues instead of pushing back on their hostile behavior in other parts of the world,” the Sept. 23 update reads.

Some of the foreign powers’ efforts include laundering deceptive content through prominent figures and releasing fabricated “leaks” intended to appear controversial. While AI has helped accelerate certain aspects of foreign influence operations targeting the United States, the Intelligence Community noted that AI has yet to revolutionize these tactics.

China's activity has been more general, seeking to shape global views of China and amplify divisive U.S. political issues, while Iran and Russia have used AI to create content more directly related to U.S. elections.

Russia has generated the most AI content related to the U.S. election, according to the Intelligence Community report. Moscow's efforts span text, images, audio, and video. The Kremlin's actions involve spreading conspiratorial narratives and AI-generated content of prominent U.S. figures aimed at deepening divides on issues such as immigration.

Iran, on the other hand, has used AI to generate social media posts and inauthentic news articles for websites posing as legitimate news sources. Such content, appearing in both English and Spanish, has targeted American voters across the political spectrum, particularly focusing on divisive issues such as the Israel–Gaza conflict and the U.S. presidential candidates.

China’s use of AI in its influence operations is primarily focused on shaping perceptions of China rather than directly influencing the U.S. election, per the report. Chinese actors have also employed AI-generated content, such as fake news anchors and social media profiles, to amplify domestic U.S. political issues such as drug use, immigration, and abortion without explicitly backing any candidate.

In its Sept. 6 election security update, the Intelligence Community stated that, at the time, China was mostly focused on influencing down-ballot races and was not yet attempting to influence the presidential race.

The foreign actors remain focused on shaping perceptions rather than directly interfering in the actual election process, the latest report states.

The report warns that adversaries will likely continue ramping up AI-driven disinformation campaigns as Election Day nears, posing a risk to U.S. democratic processes.

In March, The Epoch Times reported on the rising influence of political memes on election discourse. At the time, Pamela Rutledge, director of the Media Psychology Research Center, told The Epoch Times that deep fakes—which are realistic images, videos, and audio typically created by generative AI software—can and do effectively fool people.

Rutledge said that even if the content is obviously fake or of low quality, the messages can still be persuasive if they confirm people’s political biases.

A survey published in May by the Imagining the Digital Future Center at Elon University found that 78 percent of Americans believe that the presidential election will be influenced by “abuses” related to AI-generated content that spreads on social media.

“Many aren’t sure they can sort through the garbage they know will be polluting campaign-related content,” Lee Rainie, director of the Digital Future Center, said in a statement.

In a testament to the potential effects of AI-driven content on elections, 69 percent of survey respondents told the center they are not confident most voters can differentiate between fabricated and authentic photos, with similar percentages expressing concern about others’ ability to detect fake audio and video.

Austin Alonzo contributed to this report.