Malicious Actors Using AI to Impersonate Voices of Senior US Officials, FBI Warns

The warning comes as the House considers a bill that would increase penalties for using AI to impersonate others.
The FBI headquarters in Washington on Nov. 6, 2023. Madalina Vasiliu/The Epoch Times
Naveen Athrappully
An ongoing “malicious messaging campaign” is impersonating government officials using voice messages generated with artificial intelligence (AI), the FBI said in a May 15 public service announcement.

“Since April 2025, malicious actors have impersonated senior U.S. officials to target individuals, many of whom are current or former senior U.S. federal or state government officials and their contacts. If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic,” the alert said.

“The malicious actors have sent text messages and AI-generated voice messages—techniques known as smishing and vishing, respectively—that claim to come from a senior U.S. official in an effort to establish rapport before gaining access to personal accounts.”

Once a threat actor gains access to a U.S. official’s personal or official account, they can leverage its credibility to target other government officials and their contacts.

Threat actors may then use contact information acquired through these schemes in further impersonation scams, duping new targets into handing over information or money.

“Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical,” the FBI said.

“Never share sensitive information or an associate’s contact information with people you have met only online or over the phone.

“If contacted by someone you know well via a new platform or phone number, verify the new contact information through a previously confirmed platform or trusted source.”

The Cybersecurity and Infrastructure Security Agency (CISA) issued a similar warning in June last year.

“Impersonation scams are on the rise and often use the names and titles of government employees,” CISA said in a June 12, 2024, statement.

CISA said it was “aware of recent impersonation scammers claiming to represent the agency. As a reminder, CISA staff will never contact you with a request to wire money, cash, cryptocurrency, or use gift cards and will never instruct you to keep the discussion secret.”

On Dec. 3, 2024, the FBI issued an alert about scammers using generative artificial intelligence to commit financial fraud.

“Criminals generate short audio clips containing a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom,” it said.

A similar impersonation case occurred in 2023, when a scammer targeted an Arizona mother using an AI voice clone of her 15-year-old daughter. The mother quickly confirmed that her daughter was safe at home, foiling the scam.

In the May 15 alert, the FBI advised people to create a secret phrase or word with their family members to verify each other’s identities.

In addition, “do not click on any links in an email or text message until you independently confirm the sender’s identity,” the agency said.

Tackling Impersonation Crimes

In April last year, a new Federal Trade Commission (FTC) rule on government and business impersonation came into effect.
“The rule gives the agency stronger tools to combat and deter scammers who impersonate government agencies and businesses, enabling the FTC to file federal court cases seeking to get money back to injured consumers and civil penalties against rule violators,” the agency said in an April 1, 2024, statement.

Some of the most common tactics that impersonators use are phony subscription renewals, fake giveaways, bogus legal issues, and made-up package delivery problems.

In February this year, Rep. Eric Sorensen (D-Ill.) reintroduced the Quashing Unwanted and Interruptive Electronic Telecommunications (QUIET) Act, a House bill aimed at tackling AI voice scams, the lawmaker’s office said in a Feb. 5 statement.

The bill would increase penalties for fraudsters who use AI to impersonate individuals or organizations with the intent to defraud or harm.

The legislation is intended to strengthen protections for senior citizens and other vulnerable communities, who are among the biggest targets of such scams. The bill has yet to pass the House.

“Illinoisans are already stretched thin with high costs making every dollar count,” Sorensen said. “The last thing our seniors and working families need is to be bombarded with attacks from scam callers using new technologies to impersonate their loved ones, their bank, or the government.”

Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.