A group of 20 Big Tech companies has pledged to “prevent deceptive” use of artificial intelligence (AI) and to track down the creators of deceptive AI content as the United States and other countries head into elections in 2024.
Deepfakes of political candidates, election officials, and “other key stakeholders” in elections this year will be under the microscopes of Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X (formerly Twitter).
The specific focus of the pact is on AI-generated audio, video, and images designed to deceive voters and manipulate election processes. The companies will “work collaboratively” to build on each of their existing efforts in this arena, according to a news release.
The accord comes as over four billion people in more than 40 countries are expected to participate in elections in 2024, the news release noted.
The Tech Accord outlines a set of commitments, including that the companies will work together to create tools that detect “and address” the use of deepfakes—convincing AI-generated audio, video, and images.
The companies will also target content that gives voters false information about when, where, and how they can vote.
Munich Security Conference (MSC) Chair Christoph Heusgen, who described elections as the “beating heart” of democracies, touted the “Tech Accord” as a “crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”
“MSC is proud to offer a platform for technology companies to take steps toward reining in threats emanating from AI while employing it for democratic good at the same time,” he said in a statement.
The participating companies have committed to eight actions in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.”
These actions range from working together to develop technology to detect and address deepfakes, to mitigating risks and fostering cross-industry resilience, to providing transparency to the public regarding their efforts.
The accord also acknowledges the need for continued engagement with diverse global civil society organizations and academics.
“Amazon is committed to upholding democracy and the Munich Accord complements our existing efforts to build and deploy new AI technologies that are reliable, secure, and safe,” said David Zapolsky, senior vice president of global public policy and general counsel at Amazon.
Dana Rao, general counsel and chief trust officer at Adobe, highlighted the importance of transparency in building trust.
“That’s why we’re excited to see this effort to build the infrastructure we need to provide context for the content consumers are seeing online,” he said in a statement.
The Big Tech pact is just “one important step to safeguard online communities against harmful AI content,” the news release noted. Governments and other parts of society will also need to join these efforts, according to a Meta executive.
Nick Clegg, the president of global affairs at Meta, said, “With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content.”
“This work is bigger than any one company and will require a huge effort across industry, government, and civil society,” he added. “Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.”
Concerns about deepfake technology have grown, especially after a robocall imitating President Joe Biden was used to discourage people from voting in New Hampshire’s primary election. The call was traced back to a company in Texas. The incident prompted the Federal Communications Commission to take action against AI-generated robocalls.
The Munich Security Conference is being held in Munich, Germany, from Feb. 16 to 18. Attendees include world leaders and security officials.