The Federal Communications Commission plans to vote to make the use of AI-generated voices in robocalls illegal. The FCC said AI-generated voices in robocalls have “intensified in recent years” and have “the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates and of close family members.”
The declaratory ruling proposed by FCC Chairwoman Jessica Rosenworcel would find that “calls made with AI-generated voices are 'artificial' voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocall scams targeting consumers illegal,” the commission announced yesterday. Commissioners will vote on the proposal in the coming weeks.
A recent anti-voting robocall used an artificially generated version of President Joe Biden's voice. The calls urged Democrats not to vote in New Hampshire's presidential primary election.
An analysis by the company Pindrop concluded that the artificial Biden voice was created using a speech synthesis engine offered by ElevenLabs. ElevenLabs has apparently confirmed this conclusion and reportedly suspended the account of the user who created the deepfake.
FCC decision could help states crack down
The TCPA, a 1991 U.S. law, prohibits the use of artificial or prerecorded voices in most non-emergency calls “without the prior express consent of the called party.” The FCC is responsible for writing rules to implement the law, and violations are punishable by fines.
As the FCC noted yesterday, the TCPA “restricts telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.” Telemarketers are required “to obtain prior express written consent from consumers before calling them in an automated manner.” If adopted, the declaratory ruling would ensure that calls made with AI-generated voices are subject to the same standards.
The FCC has been considering revising its rules to account for artificial intelligence for at least a few months. In November 2023, it launched an inquiry into the impact of AI on robocalls and robotexts.
Rosenworcel said the proposed ruling would “recognize this emerging technology as illegal under existing law, giving our partners at state attorneys general offices across the country new tools they can use to crack down on these scams and protect consumers.”
“AI-generated voice and image cloning is already sowing confusion by making consumers believe scams and frauds are legitimate,” Rosenworcel said. “No matter which celebrity or politician you favor, or the relationship you have with your loved ones when they call for help, it’s possible that we are all targets of these fake calls.”