(Biometric Update)—U.S. President Joe Biden is not robocalling voters to tell them not to vote in state primaries – and Pindrop knows which AI text-to-speech (TTS) engine was used to fake his voice. A post written by the voice fraud detection firm’s CEO says its software analyzed spectral and temporal artifacts in the audio to determine that the biometric deepfake came from generative speech synthesis startup ElevenLabs.
“Pindrop’s deepfake engine analyzed the 39-second audio clip through a four-stage process,” writes CEO Vijay Balasubramaniyan. “Audio filtering & cleansing, feature extraction, breaking the audio into 155 segments of 250 milliseconds each, and continuous scoring all 155 segments of the audio.” Each segment is assigned a liveness score indicating potential artificiality.
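The segmentation arithmetic checks out: 155 windows of 250 ms cover roughly the 39-second clip. A minimal sketch of that windowing-and-scoring loop is below; the sample rate, the segmenter, and the `liveness_score` stand-in are all assumptions for illustration, not Pindrop's actual pipeline, whose scorer is a trained deep network.

```python
import numpy as np

SAMPLE_RATE = 16_000           # assumed telephony-style sample rate
SEGMENT_MS = 250               # per the article: 250 ms segments
SEGMENT_LEN = SAMPLE_RATE * SEGMENT_MS // 1000

def segment_audio(samples: np.ndarray) -> list[np.ndarray]:
    """Split a mono waveform into fixed 250 ms segments, dropping any remainder."""
    n_segments = len(samples) // SEGMENT_LEN
    return [samples[i * SEGMENT_LEN:(i + 1) * SEGMENT_LEN] for i in range(n_segments)]

def liveness_score(segment: np.ndarray) -> float:
    """Placeholder scorer. A real detector runs a trained deep network here;
    this stand-in returns spectral flatness (geometric / arithmetic mean of
    the magnitude spectrum) purely as a crude, bounded [0, 1] proxy."""
    spectrum = np.abs(np.fft.rfft(segment)) + 1e-12
    return float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))

# A full 39-second clip yields 156 complete 250 ms segments; the reported 155
# is consistent with a clip just under 39 s or with trimming during cleansing.
clip = np.random.default_rng(0).standard_normal(39 * SAMPLE_RATE)
segments = segment_audio(clip)
scores = [liveness_score(s) for s in segments]
print(len(segments), min(scores) >= 0.0, max(scores) <= 1.0)
```

Scoring every segment rather than the clip as a whole is what lets the system flag a splice: a genuine recording with a few synthetic phrases inserted would show a run of low-liveness segments amid otherwise normal scores.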
Pindrop’s system replicates end-user listening conditions by simulating typical phone channel conditions. Using a deep neural network, it outputs low-level spectro-temporal features as a fakeprint – “a unit-vector low-rank mathematical representation preserving the artifacts that distinguish between machine-generated vs. generic human speech.” Artifacts tend to show up more prominently in phrases with linguistic fricatives and, in the case of the Biden audio, in phrases the president is unlikely to have uttered.
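The "unit-vector low-rank" phrasing suggests each clip is reduced to a normalized embedding that can be compared against reference prints for known TTS engines. The sketch below is a loose illustration of that idea only: the rank-1 SVD factor standing in for the learned embedding, and the `match_engine` helper, are hypothetical, not Pindrop's method.

```python
import numpy as np

def to_fakeprint(features: np.ndarray) -> np.ndarray:
    """Collapse a (frames x dims) spectro-temporal feature matrix into a
    low-rank, unit-length vector. A real system uses a trained deep network;
    this sketch takes the leading right-singular vector as a rank-1 stand-in."""
    _, _, vt = np.linalg.svd(features, full_matrices=False)
    fp = vt[0]
    # Normalizing to a unit vector means cosine similarity is just a dot product.
    return fp / np.linalg.norm(fp)

def match_engine(fp: np.ndarray, engine_prints: dict[str, np.ndarray]) -> str:
    """Return the known engine whose reference print is most similar (cosine)."""
    return max(engine_prints, key=lambda name: float(fp @ engine_prints[name]))

# Hypothetical usage: attribute a clip's fakeprint to the nearest known engine.
rng = np.random.default_rng(1)
clip_features = rng.standard_normal((100, 32))   # e.g. 100 frames x 32 features
fp = to_fakeprint(clip_features)
```

Attribution by nearest reference print is also why Balasubramaniyan's caveat below matters: a print library only recognizes engines it has already seen.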
Balasubramaniyan points out that, “even though the attackers used ElevenLabs this time, it is likely to be a different Generative AI system in future attacks.” For its part, ElevenLabs has suspended the creator of the Biden deepfake, according to Bloomberg.
The Pindrop Co-founder and CEO wrote about the potential of biometric liveness detection as a defense against deepfakes in an August Biometric Update guest post.
Cause the fakers gonna fake, fake, fake, deepfake
Few forces in the current universal order command as much attention, or as much power to cause major shifts in culture, as generative AI. One rival force, however, is Swifties. Fans of Taylor Swift have mustered a campaign to purge the internet of pornographic deepfakes of the iconic performer that generated millions of views on the social media network X, Elon Musk’s less-regulated incarnation of Twitter. The issue has even reached the White House, which expressed “alarm” at the circulation of the fake Swift images.
Speaking to ABC News, White House Press Secretary Karine Jean-Pierre said that “while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people.”
In response to the concern, X temporarily paused searches for the singer’s name and pledged to help Swifties get the images taken down. The user accused of creating the images, Toronto man Zubear Abdi, has made his account private. Toronto-based music publication Exclaim! reports that Swift is considering suing Abdi.
But, it says, the Swifties may get to him first.
The bipartisan “Preventing Deepfakes of Intimate Images Act,” drafted to address the issue of sexually explicit AI-generated deepfakes, has been referred to the U.S. House Committee on the Judiciary, where it currently awaits action.