
Mitigating Adversarial Attacks Uncertainty Through Interval Analysis

View through CrossRef
Abstract Adversarial attacks are characterized by a high attack success rate and fast example generation, and are widely used in neural network robustness evaluation and adversarial training. Because the attack point is initialized randomly and the iterative search algorithm cannot guarantee that it reaches the global optimum, existing adversarial attack methods exhibit uncertainty in a single attack and must increase the number of attack attempts to improve the success rate. This paper defines label susceptibility to analyze the attack effect. For adversarial data with high label susceptibility, using interval analysis to search for adversarial examples in the data's neighbourhood can effectively alleviate attack uncertainty and improve the attack success rate. Experimental results on multiple datasets show that, for both white-box and black-box attack methods, our method achieves attack success rates surpassing those of baseline methods that require significantly more attack attempts, while maintaining superior computational efficiency.
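The abstract does not spell out the paper's interval-analysis procedure, so the sketch below is only a rough, hypothetical illustration of the general idea: interval bound propagation (IBP), a standard form of interval analysis for neural networks, can bound every logit over an epsilon-neighbourhood of an input and thereby flag inputs whose neighbourhood may still contain an adversarial example. The network, the epsilon value, and all function names here are assumptions, not the authors' method.

```python
# Minimal sketch (not the paper's algorithm): interval bound propagation
# over a small ReLU network, used to ask whether ANY point in the
# L-infinity epsilon-neighbourhood of x could flip the predicted label.
# All shapes, weights, and the epsilon value are hypothetical.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map y = W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_bounds(x, epsilon, layers):
    """Bound every logit over the epsilon-ball around x.

    `layers` is a list of (W, b) pairs; ReLU is applied between layers.
    """
    lo, hi = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU is monotone, so bound elementwise
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def may_contain_adversarial(x, y_true, epsilon, layers):
    """True if interval analysis cannot rule out a label flip near x.

    If some wrong class's logit upper bound reaches the true class's
    lower bound, the neighbourhood is worth attacking further.
    """
    lo, hi = ibp_bounds(x, epsilon, layers)
    rivals = np.delete(hi, y_true)
    return rivals.max() >= lo[y_true]

# Hypothetical 2-layer network and query point, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
x = rng.standard_normal(4)
print(may_contain_adversarial(x, y_true=0, epsilon=0.1, layers=layers))
```

In this reading, the interval check plays a screening role: rather than rerunning a randomly initialized attack many times, one first bounds the logits over the neighbourhood and concentrates further search on inputs where a flip cannot be ruled out.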

Related Results

Reserves Uncertainty Calculation Accounting for Parameter Uncertainty
Abstract An important goal of geostatistical modeling is to assess output uncertainty after processing realizations through a transfer function, in particular, to...
Deception-Based Security Framework for IoT: An Empirical Study
A large number of Internet of Things (IoT) devices in use have provided a vast attack surface. The security in IoT devices is a significant challenge considering c...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Governance Considerations of Adversarial Attacks on AI Systems
Artificial intelligence (AI) is increasingly integrated into various aspects of daily life, but its susceptibility to adversarial attacks poses significant governance challenges. T...
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial dete...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
Studies on Sensitivity and Uncertainty Analyses for SCOPE and WAFT With Uncertainty Propagation Methods
The purpose of Steam condensation on cold plate experiment facility (SCOPE) and Water film test (WAFT) is to verify the steam condensation and water film evaporation correlation wi...
Manipulating Recommender Systems: A Survey of Poisoning Attacks and Countermeasures
Recommender systems have become an integral part of online services due to their ability to help users locate specific information in a sea of data. However, existing studies show ...
