
Adversarial sample attack method based on loss smoothing

Deep neural networks (DNNs) are vulnerable to adversarial examples. Although existing momentum-based adversarial example generation methods can achieve a nearly 100% white-box attack success rate, they remain far less effective when attacking other models, and the black-box attack success rate is low. To address this, an adversarial example attack method based on loss smoothing is proposed to improve the transferability of adversarial examples. At each step of the iterative process, the gradient at the current point is not used directly; instead, a local average gradient is used to accumulate momentum. This suppresses local oscillations on the loss surface, stabilizing the update direction and helping the iterates escape local extrema. Extensive experiments on the ImageNet dataset show that, compared with existing momentum-based methods, the proposed method improves the average black-box attack success rate by 38.07% and 27.77% in single-model attack experiments, and by 32.50% and 28.63% in ensemble-model attack experiments.
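
For illustration, the update described above can be sketched in PyTorch as a momentum iterative (MI-FGSM-style) attack in which the gradient at each step is replaced by an average over points sampled from a small neighbourhood of the current iterate. This is a minimal sketch, not the paper's exact procedure: the function name, the uniform sampling scheme, and the hyperparameters n_samples and radius are illustrative assumptions.

    import torch

    def smoothed_mi_attack(model, loss_fn, x, y, eps=16/255, steps=10,
                           mu=1.0, n_samples=5, radius=8/255):
        # x: batch of images (N, C, H, W) in [0, 1]; y: true labels.
        # n_samples and radius are assumed hyperparameters controlling
        # the local averaging, not values taken from the paper.
        alpha = eps / steps                 # per-iteration step size
        momentum = torch.zeros_like(x)      # accumulated update direction
        x_adv = x.clone().detach()

        for _ in range(steps):
            grad_avg = torch.zeros_like(x)
            for _ in range(n_samples):
                # Sample a point in a small neighbourhood of the iterate.
                noise = torch.empty_like(x).uniform_(-radius, radius)
                x_near = (x_adv + noise).detach().requires_grad_(True)
                loss = loss_fn(model(x_near), y)
                grad_avg = grad_avg + torch.autograd.grad(loss, x_near)[0]
            grad_avg = grad_avg / n_samples  # local average gradient

            # Accumulate momentum with the L1-normalized smoothed gradient
            # (MI-FGSM-style), then take a signed ascent step.
            norm = grad_avg.abs().mean(dim=(1, 2, 3), keepdim=True)
            momentum = mu * momentum + grad_avg / norm.clamp_min(1e-12)
            x_adv = x_adv + alpha * momentum.sign()

            # Project back into the eps-ball around x and the pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

        return x_adv.detach()

Setting n_samples=1 and radius=0 recovers the plain momentum attack, which makes the contribution of the smoothing term easy to isolate in an ablation.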

Related Results

Mitigating Adversarial Attacks Uncertainty Through Interval Analysis
Abstract The adversarial attack is characterized by a high attack success rate and a fast generation of examples. It is widely used in neural network robustness evaluation ...
A novel radial beam smoothing scheme based on optical Kerr effect
Laser-beam illumination uniformity is a key issue in inertial confinement fusion facilities. In order to fulfill the requirement of improving illumination uniformity, a radial smoo...
Are owners chalk and cheese in the context of dividend smoothing asymmetry?
The study analyzes the impact of ownership structure on dividend smoothing via the lens of agency and information asymmetry theory. The study also investigates the impact of owners...
Heuristic Black-Box Adversarial Attacks on Video Recognition Models
We study the problem of attacking video recognition models in the black-box setting, where the model information is unknown and the adversary can only make queries to detect the pr...
Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
Recent studies have shown that robust overfitting and robust generalization gap are a major trouble in adversarial training of deep neural networks. These interesting prob...
Minimum Adversarial Examples
Deep neural networks in the area of information security are facing a severe threat from adversarial examples (AEs). Existing methods of AE generation use two optimization models: ...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial dete...
