
Best Way to Bust Deepfakes? Use AI to Find Real Signs of Life, Say Klick Labs Scientists

Researchers identify audio deepfakes with new algorithm and vocal biomarkers

Artificial intelligence may make it difficult for even the most discerning ears to detect deepfake voices – as recently evidenced in the fake Joe Biden robocall and the bogus Taylor Swift cookware ad on Meta – but scientists at Klick Labs say the best approach might actually come down to using AI to look for what makes us human.

Inspired by their clinical studies using vocal biomarkers to help enhance health outcomes, and their fascination with sci-fi films like “Blade Runner,” the Klick researchers created an audio deepfake detection method that taps into signs of life, such as breathing patterns and micropauses in speech.
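The published study describes this approach conceptually and does not release code. As a rough illustration of how micropause features might be extracted from a recording, the Python sketch below uses the open-source librosa library to locate silent gaps between voiced segments and summarize them. The 30 dB silence threshold, 16 kHz sample rate, 50 ms minimum gap, and feature names are illustrative assumptions, not Klick's actual pipeline.

```python
# Illustrative sketch only -- not Klick's published pipeline. All thresholds
# (30 dB silence cutoff, 16 kHz resampling, 50 ms minimum gap) are assumptions.
import numpy as np
import librosa

def pause_features(path: str, top_db: float = 30.0) -> dict:
    """Summarize the silent gaps ("micropauses") in a speech recording."""
    y, sr = librosa.load(path, sr=16000)              # mono, resampled to 16 kHz
    voiced = librosa.effects.split(y, top_db=top_db)  # [start, end] sample spans

    # Gaps between consecutive voiced segments, in seconds.
    gaps = [(voiced[i + 1][0] - voiced[i][1]) / sr for i in range(len(voiced) - 1)]
    gaps = [g for g in gaps if g > 0.05]              # drop sub-50 ms artifacts

    duration = len(y) / sr
    return {
        "pause_count": float(len(gaps)),
        "pauses_per_second": len(gaps) / duration if duration else 0.0,
        "mean_pause_s": float(np.mean(gaps)) if gaps else 0.0,
        "pause_std_s": float(np.std(gaps)) if gaps else 0.0,
        "silence_fraction": sum(gaps) / duration if duration else 0.0,
    }
```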

“Our findings highlight the potential to use vocal biomarkers as a novel approach to flagging deepfakes because they lack the telltale signs of life inherent in authentic content,” said Yan Fossat, senior vice president of Klick Labs and principal investigator of the study. “These signs are usually undetectable to the human ear, but are now discernible thanks to machine learning and vocal biomarkers.”

“Investigation of Deepfake Voice Detection using Speech Pause Patterns: Algorithm Development and Validation,” published today in the open-access journal JMIR Biomedical Engineering, describes how vocal biomarkers, combined with machine learning, can be used to distinguish deepfakes from authentic audio with reliable precision. As part of the study, Fossat and his team at Klick Labs recruited 49 participants with diverse backgrounds and accents. Deepfake models were then trained on voice samples provided by the participants, and deepfake audio samples were generated for each person. After analyzing speech pause metrics, the scientists found their models could distinguish real audio from fakes with approximately 80 percent accuracy.
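The press release does not name the specific classifier used, so the continuation below is a generic sketch of the second half of such a pipeline: compute pause metrics for labeled real and deepfake recordings, then cross-validate an off-the-shelf model. The random-forest choice, the audio/real and audio/fake directory names, and the 5-fold setup are all assumptions for illustration; only the roughly 80 percent accuracy figure comes from the study.

```python
# Hypothetical continuation of the sketch above (reuses pause_features).
# Model choice and directory layout are assumptions, not from the paper.
from pathlib import Path
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def build_dataset(real_dir: str, fake_dir: str):
    """Label real recordings 1 and deepfakes 0, with pause metrics as features."""
    X, y = [], []
    for label, folder in ((1, real_dir), (0, fake_dir)):
        for wav in sorted(Path(folder).glob("*.wav")):
            X.append(list(pause_features(str(wav)).values()))
            y.append(label)
    return X, y

X, y = build_dataset("audio/real", "audio/fake")   # hypothetical paths
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validation
print(f"Mean accuracy: {scores.mean():.2f}")       # study reports ~0.80
```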

These findings follow recent high-profile voice cloning scams, Meta’s announced plan to introduce AI-generated content labels, and the Federal Communications Commission’s February ruling to make deepfake voices in robocalls illegal. In December, a PBS NewsHour report cited public policy and AI experts’ concerns that deepfake usage will increase with the upcoming U.S. presidential election.

While the new study offers one solution to this growing problem, Fossat acknowledged the need to keep evolving detection technology as deepfakes become increasingly realistic.

Today’s news highlights Klick’s ongoing work in vocal biomarkers and AI. In October, the company announced groundbreaking research in Mayo Clinic Proceedings: Digital Health on an AI model it created to detect Type 2 diabetes from 10 seconds of voice.

About Klick Applied Sciences (including Klick Labs)

Klick Applied Sciences’ diverse team of data scientists, engineers, and biological scientists conducts scientific research and develops AI/ML and software solutions that support the company’s commercial work, drawing on its proven business, scientific, medical, and technological expertise. Its 2019 Voice Assistants Medical Name Comprehension study laid the scientific foundation for rigorously testing consumer voice assistant devices in a controlled manner. Klick Applied Sciences is part of the Klick Group of companies, which also includes Klick Health (including Klick Katalyst and btwelve), Klick Media Group, Klick Consulting, Klick Ventures, and Sensei Labs. Established in 1997, Klick has offices in New York, Philadelphia, Toronto, London, São Paulo, and Singapore. Klick has consistently been ranked a Best Managed Company, Great Place to Work, Best Workplace for Women, Best Workplace for Inclusion, Best Workplace for Professional Services, and Most Admired Corporate Culture.

Contacts

For more information, or a copy of the abstract, please contact Klick PR at pr@klick.com or 416-214-4977.
