Taming the Voice-Cloning Beast: New Tech Takes on Deepfakes
Are voice deepfakes the new frontier of fraud, or just another tech scare?
Audio deepfakes pose a dilemma, balancing benefits for people with speech impairments against risks such as fraud and misinformation. The FTC's Voice Cloning Challenge spurred innovative solutions to counter these threats.
One winning entry, OriginStory, developed a microphone that verifies the speech it captures comes from a live human by detecting biosignals, embedding that verification in the recording as a watermark. Another, AI Detect from OmniSpeech, uses embedded machine learning to identify fake voices in real time on devices such as phones and earbuds. A third, DeFake from Washington University, takes an adversarial-AI approach, adding subtle perturbations to recordings to thwart cloning attempts.
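To make the adversarial-perturbation idea concrete, here is a minimal sketch in Python (PyTorch). It is not DeFake's published method, which the article does not detail; the surrogate speaker encoder, decoy target, and epsilon budget are hypothetical stand-ins, and the sketch only illustrates the general technique of nudging a waveform so a cloning system extracts the wrong voice characteristics.

    # Illustrative sketch only (not DeFake's actual algorithm): one
    # gradient-sign step that adds a small, hard-to-hear perturbation to a
    # recording so a voice-cloning encoder extracts the wrong "voice print".
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in speaker encoder: maps a 1-second, 16 kHz waveform to a 64-dim embedding.
    surrogate_encoder = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=400, stride=160),  # crude frame-level features
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(16, 64),
    )

    waveform = torch.rand(1, 1, 16000) * 2 - 1   # placeholder recording in [-1, 1]
    decoy = torch.randn(1, 64)                   # embedding of a voice that is not the speaker's

    # Measure how a tiny change to the audio would pull its embedding toward the decoy.
    perturbation = torch.zeros_like(waveform, requires_grad=True)
    embedding = surrogate_encoder(waveform + perturbation)
    loss = nn.functional.mse_loss(embedding, decoy)
    loss.backward()

    # Take one small signed step (FGSM-style) and keep samples in a valid range.
    epsilon = 0.002
    protected = (waveform - epsilon * perturbation.grad.sign()).clamp(-1.0, 1.0)
    print("max per-sample change:", (protected - waveform).abs().max().item())

In a deployed tool, the perturbation would more likely be optimized over many steps against real speaker-encoding models and constrained to stay below audibility thresholds, but the core idea is the same: a change too small to hear can still mislead a cloning model.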
While promising, these technologies are still in development, highlighting the ongoing need for robust defenses against evolving deepfake capabilities. Can these solutions keep up with the rapid advances in AI?
Read the full article on IEEE Spectrum.
----