How Frequency and Harmonic Profiling of a ‘Voice’ Can Inform Authentication of Deepfake Audio: An Efficiency Investigation
As life in the digital era becomes more complex, the capacity for criminal activity within the digital realm grows ever wider. More recently, the development of deepfake media generation powered by Artificial Intelligence has pushed audio and video content into a realm of doubt, misinformation, and misrepresentation. Instances of deepfake media are numerous, with infamous cases ranging from manufactured graphic images of the musician Taylor Swift to the loss of $25 million transferred after a faked video call. Deepfakes become especially concerning for the general public when such material is submitted as evidence in a court case, particularly a criminal trial, and current methods of authenticating against such threats are insufficient. When considering speech within audio forensics, there is sufficient ‘individuality’ in a person’s voice to enable comparison for identification. In the case of authenticating audio suspected of containing deepfake speech, the same comparative approach can be used to identify rogue or incomparable harmonic and formant patterns within the speech. The presence of deepfake media within illegal activity demands appropriate legal enforcement, and therefore robust detection methods. The work presented in this paper proposes a robust technique for identifying such AI-synthesized speech using a quantifiable method that can be justified within court proceedings. Furthermore, it presents the correlation between the harmonic content of human speech patterns and the AI-generated clones produced from them. This paper details which spectrographic audio characteristics were found to be potentially helpful for authenticating speech for forensic purposes in the future. The results demonstrate that comparing specific frequency ranges against a known audio sample of a person’s speech indicates the presence of deepfake media through differences in harmonic structure.
KEYWORDS: Artificial Intelligence, Digital Forensics, Speech Processing, Speech Analysis.
Sri Lanka Institute of Information Technology
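The abstract describes comparing specific frequency ranges of a questioned recording against a known sample of the speaker's voice to expose divergent harmonic structure. The sketch below is a minimal illustration of that general idea, not the authors' method: the band edges, the use of a Welch long-term spectrum with cosine similarity, and the file names are all assumptions introduced here for illustration.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# Illustrative frequency bands (Hz): fundamental region, lower formants,
# upper formants, and high-frequency harmonics. These are assumed values,
# not the ranges reported in the paper.
BANDS = [(80, 300), (300, 1000), (1000, 3000), (3000, 8000)]

def band_profile(path):
    """Fraction of long-term spectral power falling in each band."""
    sr, audio = wavfile.read(path)
    if audio.ndim > 1:                     # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)
    freqs, psd = welch(audio, fs=sr, nperseg=4096)   # long-term average spectrum
    total = psd.sum()
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() / total
                     for lo, hi in BANDS])

def similarity(reference_wav, questioned_wav):
    """Cosine similarity of band profiles; markedly low values suggest the
    questioned clip's harmonic balance departs from the known speaker's."""
    a = band_profile(reference_wav)
    b = band_profile(questioned_wav)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    score = similarity("known_speaker.wav", "questioned_clip.wav")
    print(f"band-profile similarity: {score:.3f}")
```

A band-energy profile is only one coarse proxy for the harmonic and formant comparison the paper describes; a fuller treatment would track the fundamental frequency and individual harmonic peaks over time rather than averaging the whole spectrum.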