Lingo Telecom has agreed to pay $1 million in a settlement with the Federal Communications Commission over AI-generated robocalls that mimicked President Joe Biden’s voice and were designed to interfere with the 2024 New Hampshire presidential primary. 

The calls were orchestrated by Steve Kramer, a political consultant who was subsequently charged with 13 felony counts of voter suppression and 13 misdemeanor counts of candidate impersonation, and who was fined $6 million by the FCC in May.

It's the latest in a string of incidents in which AI has been used to sway voters ahead of the U.S. presidential election in November.

In July, tech billionaire Elon Musk circulated a fake AI-generated video that showed Vice President Kamala Harris making statements she never made. Musk later clarified that the video was intended as satire and not meant to be taken at face value.

While Michigan-based Lingo Telecom did not produce the deepfake material, the FCC took action against the company for failing to comply with Know Your Customer and Know Your Upstream Provider regulations, according to a Wednesday statement.

In addition to the financial penalty, Lingo Telecom has agreed to several measures to prevent its service from being used this way again. Those include:

  • Applying an A-level attestation, the highest level of trust attributed to a phone number, only to calls where Lingo Telecom itself has provided the caller ID number to the calling party.
  • Verifying each customer's and upstream provider's identity and line of business by obtaining independent corroborating records.
  • Transmitting traffic only from upstream providers that have robust robocall mitigation mechanisms in place and respond to traceback requests.

Decrypt has contacted the FCC and Lingo Telecom for comment but had not received a response at the time of writing.

In a statement, FCC Enforcement Bureau Chief Loyaan A. Egal said the settlement sends a “strong message” that communications service providers are expected to be the first line of defense against deepfake threats and that the FCC will hold them accountable.

The potential for deepfakes to mislead voters has emerged as a significant concern during the current election cycle. Earlier this week, it was reported that Donald Trump has been using AI-generated deepfakes of Taylor Swift, Elon Musk, and political opponent Kamala Harris to support his campaign for a second term.

Edited by Sebastian Sinclair
