Minimum Adversarial Examples
Deep neural networks in the area of information security face a severe threat from adversarial examples (AEs). Existing methods of AE generation use two optimization models: (1) taking a successful attack as the objective function and limiting the perturbation as the constraint; (2) taking the minimum adversarial perturbation as the objective and a successful attack as the constraint. Both involve two fundamental problems of AEs: the minimum boundary for constructing AEs, and whether that boundary is reachable, i.e., whether AEs that successfully attack the model exist exactly at that boundary. Previous optimization models give no complete answer to these problems. In this paper, for the first problem, we define the minimum AE and give a theoretical lower bound on the amplitude of the minimum AE. For the second problem, we prove that generating the minimum AE is an NP-complete problem, and, given this computational intractability, we establish a new, third optimization model. The model is general and can accommodate any constraint. To verify it, we devise two specific methods for generating amplitude-controllable AEs under widely used distance measures of adversarial perturbations, namely the Lp constraint and the SSIM (structural similarity) constraint. The model limits the amplitude of the AEs, reduces the search cost over the solution space, and thereby improves efficiency. In theory, AEs generated by the new model lie closer to the actual minimum adversarial boundary, which overcomes the blind amplitude setting of existing methods and further improves the attack success rate. The model can also generate accurate AEs with controllable amplitude under different constraints, making it suitable for different application scenarios. Extensive experiments show better attack ability under the same constraints than other baseline attacks: across all tested datasets, the attack success rate of our method improves by approximately 10% over the baselines.
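For reference, the two existing formulations described in the abstract are commonly written as follows, where f is the target classifier, x the clean input with label y, δ the perturbation, ε the amplitude budget, and L a classification loss. This is a hedged sketch in standard notation, not the paper's own definitions.

```latex
% Model (1): successful attack as the objective, bounded perturbation as the constraint
\max_{\delta}\; \mathcal{L}\bigl(f(x+\delta),\, y\bigr)
\quad \text{s.t.} \quad \|\delta\|_{p} \le \epsilon

% Model (2): minimum perturbation as the objective, successful attack as the constraint
\min_{\delta}\; \|\delta\|_{p}
\quad \text{s.t.} \quad f(x+\delta) \ne y
```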
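To make the two constraint types concrete, below is a minimal sketch of a generic L-infinity-constrained gradient attack together with an SSIM feasibility check. It only illustrates the Lp and SSIM constraints named in the abstract under assumed parameter values (eps, alpha, steps, min_ssim); it is not the paper's proposed method.

```python
# Hedged sketch of a generic L-infinity-constrained attack plus an SSIM check.
# NOT the paper's algorithm; model, eps, alpha, steps, min_ssim are assumptions.
import torch
import torch.nn.functional as F
from skimage.metrics import structural_similarity as ssim

def linf_constrained_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
    """Maximize the classification loss while projecting onto the L-inf eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # stay a valid image
    return x_adv.detach()

def satisfies_ssim_constraint(x, x_adv, min_ssim=0.95):
    """Check the structural-similarity constraint for a single CHW image with batch dim."""
    a = x.squeeze(0).permute(1, 2, 0).cpu().numpy()       # CHW -> HWC
    b = x_adv.squeeze(0).permute(1, 2, 0).cpu().numpy()
    return ssim(a, b, channel_axis=-1, data_range=1.0) >= min_ssim
```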

