China, Russia, and Iran are “very likely” to use artificial intelligence tools to attempt to interfere in Canada’s general election this year, with Beijing being the most likely to generate fake content and launch targeted propaganda campaigns aimed at spreading disinformation among Canadian voters, says one of Canada’s key security and intelligence agencies.
Canada is particularly vulnerable, as the majority of its citizens receive their news and information from the internet or social media, thus “increasing their exposure to AI-enabled malign influence campaigns,” the Communications Security Establishment (CSE) said. Meanwhile, data on Canadians, as well as on public and political organizations, can be mined from online sources, enabling foreign actors to create fake content and craft tailored propaganda campaigns.
“We assess that the PRC, Russia, and Iran will very likely use AI-enabled tools to attempt to interfere with Canada’s democratic process before and during the 2025 election,” reads the report.
“When targeting Canadian elections, threat actors are most likely to use generative AI as a means of creating and spreading disinformation, designed to sow division among Canadians and push narratives conducive to the interests of foreign states.”
China’s Threat to Canadian Elections
The People’s Republic of China (PRC) is the most likely foreign actor to target Canadian elections, says the agency, while Russia and Iran “almost certainly” view Canadian elections as lower-priority targets compared to elections in the United States and the UK. If Russia or Iran does target Canada, “they are more likely to use low-effort cyber or influence operations.”

The agency cites as an example the 2021 Canadian general election, in which actors likely or known to be affiliated with the Chinese regime spread non-AI-enabled disinformation about politicians running for office whom they deemed to be “anti-PRC.”
Two years later, a propaganda campaign called “Spamouflage Dragon,” likely linked to China, spread disinformation targeting dozens of MPs, including Prime Minister Justin Trudeau, Conservative Leader Pierre Poilievre, and several cabinet members, the report notes, adding that the network has previously used generative AI to target Mandarin-speaking figures in Canada.
Another key finding of the report is that some foreign nation states, particularly the PRC, are undertaking “massive data collection campaigns” targeting “democratic politicians, public figures, and citizens around the world.”
The risk grows when advances in predictive AI allow those states to “quickly query and analyze these data,” the report says, as doing so improves their understanding of political environments in democratic countries. Unlike generative AI, which produces new content, predictive AI tools are designed to analyze data by recognizing patterns within it.
“By possessing detailed profiles of key targets, social networks, and voter psychographics, threat actors are almost certainly enhancing their capabilities to conduct targeted influence and espionage campaigns,” the report reads.
The report notes that it is “likely” the Chinese regime has used social media platform TikTok to promote pro-PRC narratives in democratic countries and to censor narratives it identifies as anti-PRC. It adds that operations aimed at impacting “user beliefs and behaviours on a massive scale” are “likely” to have targeted voters ahead of an election on at least one occasion. TikTok is owned by PRC-based company ByteDance.
Election Integrity
While it’s “very unlikely” that disinformation or AI-enabled cyber activity would undermine the integrity of Canada’s upcoming general election, ongoing AI advancement and the growing proficiency of cyber adversaries in using these technologies mean “the threat against future Canadian general elections is likely to increase,” the CSE said in its report.

Although Canada conducts its general elections by paper ballot, much of the electoral infrastructure is digitized, including voter registration systems, election websites, and communications within election management bodies, making those systems vulnerable to malicious cyber activity.
“Cyber actors can use generative AI to quickly create targeted and convincing phishing emails, potentially allowing them illicit entry to this infrastructure, where they can install malware or exfiltrate and expose sensitive information,” says the agency.
The report cites a case from last July, in which Chinese regime-affiliated hackers gained access to UK electoral registers containing the names and addresses of everyone registered to vote between 2014 and 2021, according to the UK government. “AI-enabled cyber actors can use data such as this to develop propaganda campaigns tailored to specific audiences,” the CSE report says.