Lingo Telecom has agreed to pay $1 million in a settlement with the Federal Communications Commission over AI-generated robocalls designed to imitate President Joe Biden's voice and disrupt the 2024 New Hampshire presidential primary.
The calls were orchestrated by political consultant Steve Kramer, who was subsequently indicted on 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate, and who faces a $6 million fine proposed by the FCC in May of this year.
This is the latest in a series of attempts to use AI to sway voters ahead of the US presidential election in November.
In July, tech billionaire Elon Musk circulated an AI video online that falsely portrayed Vice President Kamala Harris saying things she never said. Musk later clarified that the video was intended as satire and shouldn't be taken at face value.
Michigan-based Lingo Telecom did not produce the deepfake material itself, but the FCC took action against the company for failing to comply with its know-your-customer and know-your-upstream-provider rules, according to a statement on Wednesday.
In addition to the settlement payment, Lingo Telecom has agreed to a number of measures to prevent its services from being used this way in the future, including:
- Applying A-level attestation, the highest level of trust that can be assigned to a phone number, only to calls where Lingo Telecom itself provides the caller with the caller ID number.
- Obtaining independent corroborating records to verify the identity and line of business of each customer and upstream provider.
- Maintaining robust robocall mitigation measures and only accepting traffic from upstream providers that respond to traceback requests.
Decrypt reached out to the FCC and Lingo Telecom for comment but had not received a response at the time of writing.
FCC Enforcement Bureau Chief Loyaan A. Egal said in a statement that the settlement makes communications service providers the first line of defense against the threat of deepfakes and sends a “strong message” that the FCC will hold them accountable.
The potential for deepfakes to mislead voters has emerged as a significant concern during the current election cycle, with reports earlier this week that Donald Trump is using AI-generated deepfakes of Taylor Swift, Elon Musk, and political rival Kamala Harris to aid his re-election campaign.
Editor: Sebastian Sinclair