FCC Proposes New Disclosure Rules for Entities Using AI-Generated Robocalls

The seal of the Federal Communications Commission in Washington on Dec. 14, 2017. (Jacquelyn Martin/AP Photo)
Matt McGregor

The Federal Communications Commission (FCC) has proposed new consumer protection policies against robocalls generated with artificial intelligence (AI).

These new policies would require companies to disclose that they are using AI-generated calls.

“Bad actors are already using AI technology in robocalls to mislead consumers and misinform the public,” FCC Chairwoman Jessica Rosenworcel said in a statement on July 16. “That’s why we want to put in place rules that empower consumers to avoid this junk and make informed decisions.”
The proposed rules build on the FCC’s 2023 Notice of Inquiry (NOI), which examined how the agency can protect consumers under the Telephone Consumer Protection Act (TCPA) of 1991. The TCPA restricts companies from using automatic dialing systems, artificial or prerecorded voice messages, and other communications equipment when soliciting residential customers.

The TCPA prohibits companies from initiating a residential call using this technology “without the prior express consent of the called party.”

It also prohibits “any telephone facsimile machine, computer, or other device” from sending advertisements unless there has been an established relationship or if the company acquired the resident’s number from a location where it was knowingly submitted, such as a phone book or internet site.

The TCPA authorizes the FCC to “prescribe technical and procedural standards for systems that are used to transmit any artificial or prerecorded messages via telephone.”

“Complaints regarding unwanted and illegal robocalls and robotexts are consistently the top category of consumer complaints that we receive,” the FCC said in its report.

According to the NOI, the advancement of AI technology could bring more opportunities for the robocall and robotext business.

AI Technologies Defined

The FCC defines AI technologies as “any program which emulates any aspect of human intelligence, such as a human voice.”

FCC Commissioner Geoffrey Starks said in the NOI that because the future of AI is uncertain, intersecting agencies must evaluate its potential to “impact, if not transform” society.

“Because of that potential, each part of our government bears a responsibility to better understand the risks and opportunities presented within its mandate, while being mindful of the limits of its experience and its authority,” he said. “And in this era of rapid technological change, we must collaborate, lending our learnings and sharing our expertise across agencies to better serve our citizens and consumers.”

The FCC also recently declared the use of voice-cloning technology in robocalls to be illegal.
In May, New Hampshire Attorney General John Formella announced the indictment of 54-year-old Steve Kramer on felony voter suppression and misdemeanor candidate impersonation charges for allegedly using AI-generated calls imitating President Joe Biden’s voice to tell people not to vote in the state’s primary election this past January.

The FCC has proposed a $6 million fine against Mr. Kramer.

In the FCC’s report on the case, Commissioner Anna Gomez said that Mr. Kramer’s alleged use of voice-cloning “exemplifies AI technology being harnessed for harm.”

“The consequences for consumers and the threat to our democratic processes warrant a strong response,” Ms. Gomez said. “That is why this proposed penalty is so important, as the Commission must do what is within our power to deter scams manipulating AI to prey on consumers and to threaten our democratic processes.”

Jana Pruet contributed to this report.