To crack down on fraudulent activities and safeguard consumer interests, the Federal Communications Commission (FCC) has officially prohibited the use of artificial intelligence-generated voices in unsolicited robocalls across the United States.
The move follows an incident in which New Hampshire residents received fabricated voice messages mimicking U.S. President Joe Biden that advised them against participating in the state’s primary election.
FCC Extends TCPA Protections
The ban, implemented under the Telephone Consumer Protection Act (TCPA), represents a step towards curbing the proliferation of robocall scams.
FCC Chairwoman Jessica Rosenworcel stated, “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice.”
Robocall scams, already outlawed under the TCPA, increasingly rely on sophisticated AI technology to mimic voices and deceive unsuspecting recipients. The latest ruling extends the prohibition to cover “voice cloning technology,” effectively outlawing a key tool scammers use in fraudulent schemes.
We’re proud to join in this effort to protect consumers from AI-generated robocalls with a cease-and-desist letter sent to the Texas-based company in question. https://t.co/qFtpf7eR2X https://t.co/ki2hVhH9Fv
— The FCC (@FCC) February 7, 2024
The TCPA aims to protect consumers from intrusive communications and “junk calls” by imposing restrictions on telemarketing practices, including the use of artificial or prerecorded voice messages.
In a statement, the FCC emphasized the technology’s potential to spread misinformation by impersonating authoritative figures. While law enforcement agencies have traditionally had to target the outcomes of fraudulent robocalls, the new ruling empowers them to prosecute offenders solely for using AI to fabricate voices in such calls.
Texas Firm Linked to Biden Robocall
In a related development, authorities have traced a recent high-profile robocall incident imitating President Joe Biden’s voice back to a Texas-based firm named Life Corporation and an individual identified as Walter Monk.
Arizona Attorney General Kris Mayes has since sent a warning letter to the company. “Using AI to impersonate the President and lie to voters is beyond unacceptable,” said Mayes, adding that deceptive practices like this have no place in a democracy and would only further diminish public trust in the electoral process.
I stand with 50 attorneys general in pushing back against a company that allegedly used AI to impersonate the President in scam robocalls ahead of the New Hampshire primary. Deceptive practices such as this have no place in our democracy. https://t.co/CqucNaEQGn pic.twitter.com/ql4FQzutdl
— AZ Attorney General Kris Mayes (@AZAGMayes) February 8, 2024
New Hampshire Attorney General John Formella has also confirmed that a cease-and-desist letter has been issued to the company and that a criminal investigation is underway.
“We are committed to keeping our elections free and fair,” asserted Attorney General Formella during a press conference in Concord, New Hampshire. He condemned the robocall as an attempt to exploit AI technology to undermine the democratic process, vowing to pursue strict legal measures against perpetrators.
The robocall, sent on January 21 to thousands of Democratic voters, urged recipients to abstain from voting in the primary and save their votes for the November general election.