
Adversarial Robustness Improvement for Deep Neural Networks

Abstract: Deep neural networks (DNNs) are key components for the implementation of autonomy in systems that operate in highly complex and unpredictable environments (self-driving cars, smart traffic systems, smart manufacturing, etc.). It is well known that DNNs are vulnerable to adversarial examples, i.e. minimal and usually imperceptible perturbations applied to their inputs that lead to false predictions. This threat poses critical challenges, especially when DNNs are deployed in safety- or security-critical systems, and makes urgent the need for defences that can improve the trustworthiness of DNN functions. Adversarial training has proven effective in improving the robustness of DNNs against a wide range of adversarial perturbations. However, a general framework for adversarial defences is needed that extends beyond a single-dimensional assessment of robustness improvement; it is essential to consider several distance metrics and adversarial attack strategies simultaneously. Using such an approach, we report results from extensive experimentation on adversarial defence methods that could improve DNNs' resilience to adversarial threats. We conclude by introducing a general adversarial training methodology which, according to our experimental results, opens prospects for a holistic defence against a range of diverse types of adversarial perturbations.
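The abstract describes adversarial examples and adversarial training only at a high level. The sketch below is a generic PyTorch illustration of the two ingredients it refers to: crafting a bounded input perturbation (here, a single-step FGSM attack) and training the model on the perturbed inputs. It is not the paper's methodology; the model, data loader, epsilon budget, and function names are assumptions made for illustration.

```python
# Minimal sketch, assuming a PyTorch image classifier with inputs in [0, 1].
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon):
    """Craft an L-infinity-bounded adversarial example with one gradient step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input element in the direction that increases the loss, bounded by epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of adversarial training: fit the model on perturbed inputs."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)  # attack the current model state
        optimizer.zero_grad()                       # drop gradients left over from the attack step
        loss = F.cross_entropy(model(x_adv), y)     # train against the adversarial batch
        loss.backward()
        optimizer.step()
```

A multi-dimensional robustness assessment of the kind the abstract calls for would repeat the evaluation with several attack strategies and distance metrics (for example, iterative attacks under both L-infinity and L2 budgets) rather than a single perturbation type.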

Related Results

Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years. But the cognition of the human brain’s working process as a whole is still obscure. Both fuzzy logic ...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
Recent studies have shown that robust overfitting and the robust generalization gap are major problems in adversarial training of deep neural networks. These interesting prob...
Improving Diversity and Quality of Adversarial Examples in Adversarial Transformation Network
Abstract: This paper proposes a method to mitigate two major issues of Adversarial Transformation Networks (ATN), including the low diversity and the low quality of adversari...
Adversarial Training and Robustness in Machine Learning Frameworks
In the realm of machine learning, ensuring robustness against adversarial attacks is increasingly crucial. Adversarial training has emerged as a prominent strategy to fortify model...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Improving Intrusion Detection Robustness Through Adversarial Training Methods
Network Intrusion Detection Systems (NIDS) leveraging deep learning architectures have demonstrated exceptional performance in identifying cyber threats through automated feature l...
