FTC Unveils New Measures to Combat AI Impersonation, Seeking Public Comment

The commission has now proposed expanding the rule to prohibit the impersonation of individuals.
The Federal Trade Commission building is seen in Washington on March 4, 2012. Gary Cameron/Reuters
Aldgra Fredly

The U.S. Federal Trade Commission (FTC) on Feb. 15 proposed modifying a rule that currently bans the impersonation of government agencies and businesses so that it also bans the impersonation of individuals.

The FTC said it had finalized the rule allowing the commission to file federal court cases to force scammers to return the money they made from impersonating government agencies and businesses.


In a statement, FTC Chair Lina Khan said that the proposed rule changes would strengthen the commission’s toolkit “to address AI-enabled scams impersonating individuals.”

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” Ms. Khan said.

“With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” she added.

The proposed rule changes follow “surging complaints” around impersonation fraud and “public outcry” about the harms caused to consumers and to impersonated individuals, according to the FTC.

“Emerging technology, including AI-generated deepfakes, threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud,” it stated.

The FTC said it is seeking public input on whether the revised rule should include provisions prohibiting the use of AI platforms for impersonation.

“As scammers find new ways to defraud consumers, including through AI-generated deepfakes, this proposal will help the agency deter fraud and secure redress for harmed consumers,” it stated.

According to the FTC, government and business impersonation frauds have cost consumers “billions of dollars” in recent years, and both categories saw “significant increases” last year.

This photo illustration created in Washington on November 17, 2023, shows a phone screen displaying a social media video marked as an "altered video," in front of a fact-checked image of news anchors where the claim about them was found to be false. Stefani Reynolds/AFP

Concerns about deepfake technology have grown, especially after a robocall impersonating President Joe Biden was used to discourage people from voting in New Hampshire’s primary election. The call was traced back to a company in Texas.

The incident prompted the Federal Communications Commission (FCC) to take action against AI-generated robocalls. On Feb. 8, the FCC announced the unanimous adoption of a ruling that makes the use of voice-cloning technology in robocalls illegal.

FCC Chairwoman Jessica Rosenworcel said that bad actors are using AI-generated voices in unsolicited robocalls “to extort vulnerable family members, imitate celebrities, and misinform voters.”

“We’re putting the fraudsters behind these robocalls on notice,” Ms. Rosenworcel stated.

Meanwhile, a group of 20 Big Tech companies has pledged to prevent the deceptive use of AI and to track down the creators of deceptive content as the United States and other countries head into elections in 2024.

Deepfakes of political candidates, election officials, and “other key stakeholders” in this year’s elections will be under the microscope of Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, TruePic, and X (formerly Twitter).

The technology giants, some of which have been embroiled in controversy over censoring disfavored political views during elections, have signed a pact to combat the deceptive use of AI in the 2024 elections.

Caden Pearson contributed to this report.