Foreign powers are increasingly using artificial intelligence (AI) to influence how Americans vote, according to a new intelligence report released just 45 days ahead of the 2024 presidential election.
“These actors most likely judge that amplifying controversial issues and divisive rhetoric can serve their interests by making the United States and its democratic system appear weak and by keeping the U.S. government distracted with internal issues instead of pushing back on their hostile behavior in other parts of the world,” the Sept. 23 update reads.
Some of the foreign powers’ efforts include laundering deceptive content through prominent figures and releasing fabricated “leaks” intended to appear controversial. While AI has helped accelerate certain aspects of foreign influence operations targeting the United States, the Intelligence Community noted that AI has yet to revolutionize these tactics.
While China’s activity has been more general, seeking to shape global views of China and amplify divisive U.S. political issues, Iran and Russia have used AI to create content more directly tied to U.S. elections.
Russia has generated the most AI content related to the U.S. election, according to the Intelligence Community report. Moscow’s efforts span text, images, audio, and video. The Kremlin’s actions involve spreading conspiratorial narratives and AI-generated content featuring prominent U.S. figures, aimed at deepening divides on issues such as immigration.
Iran, on the other hand, has used AI to generate social media posts and inauthentic news articles for websites posing as legitimate news sources. Such content, appearing in both English and Spanish, has targeted American voters across the political spectrum, particularly focusing on divisive issues such as the Israel–Gaza conflict and the U.S. presidential candidates.
China’s use of AI in its influence operations is primarily focused on shaping perceptions of China rather than directly influencing the U.S. election, per the report. Chinese actors have also employed AI-generated content, such as fake news anchors and social media profiles, to amplify domestic U.S. political issues such as drug use, immigration, and abortion without explicitly backing any candidate.
In its Sept. 6 election security update, the Intelligence Community stated that, at the time, China was mostly focused on influencing down-ballot races and was not yet attempting to influence the presidential race.
The foreign actors’ goal remains focused more on influencing perceptions than on directly interfering in the election process itself, the latest report states.
The report warns that adversaries will likely continue ramping up AI-driven disinformation campaigns as Election Day nears, posing a risk to U.S. democratic processes.
Rutledge said that even if the content is obviously fake or of low quality, the messages can still be persuasive if they confirm people’s political biases.
“Many aren’t sure they can sort through the garbage they know will be polluting campaign-related content,” Lee Rainie, director of the Digital Future Center, said in a statement.
Underscoring the potential effects of AI-driven content on elections, 69 percent of survey respondents told the center they are not confident most voters can distinguish fabricated photos from authentic ones, with similar percentages expressing concern about others’ ability to detect fake audio and video.