The U.S. Federal Trade Commission (FTC) on Feb. 15 proposed expanding a rule that currently bans the impersonation of government agencies and businesses to also ban the impersonation of individuals.
The FTC said it had finalized the existing rule, which allows the commission to file federal court cases to force scammers to return money made by impersonating government agencies and businesses. The commission has now proposed expanding that rule to also prohibit the impersonation of individuals.
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” FTC Chair Lina Khan said.
“With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” she added.
The proposed rule changes follow “surging complaints” around impersonation fraud and “public outcry” about the harms caused to consumers and to impersonated individuals, according to the FTC.
“Emerging technology, including AI-generated deepfakes, threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud,” it stated.
The FTC said it is seeking public input on whether the revised rule should include provisions prohibiting the use of AI platforms for impersonation.
“As scammers find new ways to defraud consumers, including through AI-generated deepfakes, this proposal will help the agency deter fraud and secure redress for harmed consumers,” it stated.
According to the FTC, government and business impersonation frauds have cost consumers “billions of dollars” in recent years, and both categories saw “significant increases” last year.
Concerns about deepfake technology have grown, especially after a robocall imitating President Joe Biden was used to discourage people from voting in New Hampshire’s primary election. The call was traced back to a company in Texas.
Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel said that bad actors are using AI-generated voices in unsolicited robocalls “to extort vulnerable family members, imitate celebrities, and misinform voters.”
“We’re putting the fraudsters behind these robocalls on notice,” Ms. Rosenworcel stated.
Deepfakes of political candidates, election officials, and “other key stakeholders” in this year’s elections will be under the microscope of Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X (formerly Twitter).
The technology companies, some of which have been embroiled in controversy over censoring disfavored political views during past elections, have signed a pact to combat the deceptive use of AI in the 2024 elections.