Meta Bans Political Ads Using Generative AI Amid Election Misinformation Concerns

Some U.S. lawmakers have expressed concerns about political ‘deepfakes’ amid the elections.
A security guard stands watch by the Meta sign outside the headquarters of Facebook parent company Meta Platforms Inc in Mountain View, Calif., on Nov. 9, 2022. Peter DaSilva/Reuters
Caden Pearson

Meta, the parent company of Facebook, has announced that it will bar political campaigns and advertisers in regulated industries from using its new generative AI advertising products.

This move, aimed at curbing the spread of election misinformation in the run-up to the 2024 U.S. presidential elections, was publicly disclosed through updates on the company’s help center on Monday night.

While Meta’s advertising standards already prohibit ads with debunked content, there were no specific rules concerning AI-generated content until now.

The company clarified that advertisers running campaigns related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services would not be permitted to use these generative AI features.

Meta’s note said that the company believes this approach will allow it to “better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries.”

This announcement comes roughly a month after Meta’s decision to expand advertisers’ access to AI-powered advertising tools, which include background generation, image expansion, and text variations.

Meta is the world’s second-biggest platform for digital ads, behind Alphabet, the owner of Google.

Meta introduced three new AI-powered tools for businesses: background generation, image expansion, and text variations. Background generation creates multiple backgrounds for product images; image expansion adjusts creative assets for different platforms; and text variations generates multiple versions of ad copy.

Initially, these AI tools were available to a select group of advertisers, but Meta plans to make them accessible to all advertisers worldwide by next year, per an Oct. 4 post.

These developments align with the broader trend of tech companies rushing to launch generative AI advertising products and virtual assistants in response to the rise of AI technologies like OpenAI’s ChatGPT, which provides human-like written responses to questions and other prompts.

Google has already said that it is taking steps to regulate AI-powered ads by restricting “political keywords” as prompts in its generative AI ad tools.

The company will require ads that contain “synthetic content that inauthentically depicts real or realistic-looking people or events” to include a disclosure note, with this policy update scheduled for mid-November.

Nick Clegg, Meta’s top policy executive, has stressed the need for tech companies to prepare for the potential misuse of generative AI in upcoming elections. He called for heightened scrutiny of election-related content “that moves from one platform to the other” and revealed Meta’s commitment to watermarking content generated by AI.

Meta committed this summer to developing a system to “watermark” content generated by AI. Mr. Clegg has previously said Meta is blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.

TikTok and Snapchat both prohibit political ads, while X (formerly Twitter) has yet to introduce any generative AI ad tools.

Sen. Amy Klobuchar (D-Minn.) speaks during a press conference on lowering prescription drug prices in Washington, on July 13, 2023. (Madalina Vasiliu/The Epoch Times)

Lawmakers Concerned About Deepfakes

After Google’s recent announcement that it would introduce requirements for disclosures on AI-generated content, several U.S. lawmakers are calling on other tech giants to adopt similar measures.

Chief among the lawmakers making those calls was Sen. Amy Klobuchar (D-Minn.), chair of the Senate Democratic Steering Committee, who hailed Meta’s announcement on Tuesday, suggesting that it should be mandated.

“This is a step in the right direction, but we can’t just rely on voluntary commitments,” Ms. Klobuchar wrote on X. “I’m working to implement guardrails so AI-manipulated ads don’t upend our elections.”

Ms. Klobuchar wrote to Meta founder Mark Zuckerberg in early October, asking what guardrails would be put in place to protect political figures from so-called “deepfakes.” Deepfakes are AI-generated videos or images that are often indistinguishable from real footage.

“With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platform—where voters often turn to learn about candidates and issues,” wrote Ms. Klobuchar and Rep. Yvette Clarke (D-N.Y.).

A phone displaying a statement from the head of security policy at Meta, in front of a screen displaying a deepfake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons, in Washington, on Jan. 30, 2023. (Olivier Douliery/AFP via Getty Images)

Ms. Clarke introduced a House bill earlier this year that would amend a federal election law to require labels when election ads contain AI-generated images or video. Ms. Klobuchar is sponsoring a companion bill in the Senate.

“I think that folks have a First Amendment right to put whatever content on social media platforms that they’re moved to place there,” Ms. Clarke said. “All I’m saying is that you have to make sure that you put a disclaimer and make sure that the American people are aware that it’s fabricated.”

Another bipartisan bill, co-sponsored by Sen. Josh Hawley (R-Mo.), would go further, banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire.

In the 2024 election, AI-generated ads are already making waves. In April, the Republican National Committee aired an ad using AI to paint a speculative picture of the future under President Joe Biden’s potential re-election.

The ad featured fake yet convincing visuals, including boarded-up storefronts, military patrols, and immigrant-related turmoil.