Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has driven significant advances in detection models, yet these models remain vulnerable to adversarial attacks that exploit weaknesses in deep learning architectures. This study investigates the vulnerability and robustness of video-based deepfake detection models, specifically evaluating a Long Short-Term Convolutional Neural Network (LST-CNN) against adversarial perturbations generated with the Fast Gradient Sign Method (FGSM). We evaluate model performance under both clean and adversarial conditions, highlighting the impact of adversarial modifications on detection accuracy. Our results show that even slight adversarial perturbations significantly reduce model accuracy, with the baseline LST-CNN experiencing sharp performance degradation under FGSM attacks. However, models trained with adversarial examples exhibit enhanced resilience, maintaining higher accuracy under attack conditions. The study also evaluates defense strategies, such as adversarial training and input preprocessing, that help improve model robustness. These findings underscore the critical need for robust defense mechanisms to secure deepfake detection models and provide insights into improving model reliability in real-world applications, where adversarial manipulation is a growing concern.
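
The FGSM attack referenced in the abstract perturbs each input in the direction of the sign of the loss gradient. The sketch below is a minimal, hypothetical illustration assuming a generic PyTorch video classifier (`model`), clips normalized to [0, 1], and an illustrative epsilon; it is not the paper's actual implementation.

```python
# Hedged sketch: FGSM perturbation of a video clip for a generic
# PyTorch deepfake classifier. The model interface, label encoding,
# and epsilon value are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_attack(model, clip, labels, epsilon=0.01):
    """Return an adversarially perturbed copy of `clip`.

    clip:   float tensor, e.g. (batch, frames, channels, H, W), values in [0, 1]
    labels: integer class labels (e.g. 0 = real, 1 = fake)
    """
    clip = clip.clone().detach().requires_grad_(True)
    logits = model(clip)                      # forward pass
    loss = F.cross_entropy(logits, labels)    # loss w.r.t. the true labels
    loss.backward()                           # gradient of the loss w.r.t. the input
    # FGSM step: move each pixel by epsilon in the direction that increases the loss
    adv_clip = clip + epsilon * clip.grad.sign()
    return adv_clip.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```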
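Adversarial training, one of the defenses the abstract highlights, exposes the model to perturbed examples during optimization. The sketch below builds on the FGSM function above; the optimizer, the 50/50 clean/adversarial mixing ratio, and epsilon are assumptions for illustration, not details reported in the study.

```python
# Hedged sketch of one adversarial-training step: each batch is trained
# on both clean clips and FGSM-perturbed clips.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, clips, labels, epsilon=0.01):
    model.train()
    # Generate perturbed clips with the fgsm_attack sketch above.
    adv_clips = fgsm_attack(model, clips, labels, epsilon)
    optimizer.zero_grad()
    # Equal weighting of clean and adversarial loss (an assumed ratio).
    loss = 0.5 * F.cross_entropy(model(clips), labels) \
         + 0.5 * F.cross_entropy(model(adv_clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```
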
Related Results
Evaluating the Threshold of Authenticity in Deepfake Audio and Its Implications Within Criminal Justice
Deepfake technology has come a long way in recent years and the world has already seen cases where it has been used maliciously. After a deepfake of UK independent financial adviso...
Deepfake Detection with Choquet Fuzzy Integral
Deep forgery has spread quickly in recent years and continues to develop; it has even been used in films. This development and spread have beg...
A New Deepfake Detection Method Based on Compound Scaling Dual-Stream Attention Network
INTRODUCTION: Deepfake technology allows for the overlaying of existing images or videos onto target images or videos. The misuse of this technology has led to increasing complexit...
How Frequency and Harmonic Profiling of a ‘Voice’ Can Inform Authentication of Deepfake Audio: An Efficiency Investigation
As life in the digital era becomes more complex, the capacity for criminal activity within the digital realm becomes even more widespread. More recently, the development of deepfak...
Enhancing Real-Time Video Processing With Artificial Intelligence: Overcoming Resolution Loss, Motion Artifacts, And Temporal Inconsistencies
Purpose: Traditional video processing techniques often struggle with critical challenges such as low resolution, motion artifacts, and temporal inconsistencies, especially in real-...
Deception-Based Security Framework for IoT: An Empirical Study
A large number of Internet of Things (IoT) devices in use has provided a vast attack surface. The security in IoT devices is a significant challenge considering c...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
Novel Convolutional Neural Networks based Jaya algorithm Approach for Accurate Deepfake Video Detection
Deepfake videos are becoming an increasing concern due to their potential to spread misinformation and cause harm. In this paper, we propose a novel approach for accurately detecti...