
Out2In: Towards Machine Learning Models Resilient to Adversarial and Natural Distribution Shifts

Despite recent progress, the susceptibility of machine learning models to adversarial examples remains a challenge, which calls for rethinking the defense strategy. In this paper, we investigate the cause-effect link between adversarial examples and the out-of-distribution (OOD) problem. To that end, we propose Out2In, an OOD generalization method that is resilient not only to adversarial but also to natural distribution shifts. Building on an OOD-to-in-distribution mapping intuition that leverages image-to-image translation, Out2In translates OOD inputs into the data distribution used to train/test the model. First, we experimentally confirm that the adversarial examples problem is related to the wider OOD generalization problem. Then, through extensive experiments on three benchmark image datasets (MNIST, CIFAR10, and ImageNet), we show that Out2In consistently improves robustness to OOD adversarial inputs and outperforms state-of-the-art defenses by a significant margin, while preserving accuracy on benign (in-distribution) data. Furthermore, it generalizes to naturally OOD inputs such as darker or sharper images.
Institute of Electrical and Electronics Engineers (IEEE)
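
The abstract describes Out2In's core mechanism only at a high level: an image-to-image translation model maps a possibly out-of-distribution input back toward the distribution the classifier was trained on, and the unchanged classifier then labels the translated image. The PyTorch sketch below illustrates that preprocessing-style wiring under stated assumptions; the toy encoder-decoder, the class names `Translator` and `Out2InWrapper`, and all hyperparameters are illustrative placeholders, not the paper's actual architecture or training procedure.

```python
# Minimal sketch of an OOD-to-in-distribution preprocessing defense,
# assuming the general recipe the abstract describes (image-to-image
# translation in front of a frozen classifier). Not the authors' code.
import torch
import torch.nn as nn


class Translator(nn.Module):
    """Toy convolutional encoder-decoder standing in for the
    image-to-image translation network (assumed architecture)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep pixel values in [0, 1], like the training data
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class Out2InWrapper(nn.Module):
    """Translate first, then classify; the downstream classifier stays frozen."""

    def __init__(self, translator: nn.Module, classifier: nn.Module):
        super().__init__()
        self.translator = translator
        self.classifier = classifier

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_in = self.translator(x)     # map a possibly OOD input toward the training distribution
        return self.classifier(x_in)  # the classifier itself is unchanged
```

Because the classifier is untouched, a wrapper like this preserves accuracy on benign in-distribution inputs as long as the translator acts close to the identity on them, which matches the behavior the abstract claims.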

Related Results

Out2In: Towards Machine Learning Models Resilient to Adversarial and Natural Distribution Shifts
Despite recent progress, the susceptibility of machine learning models to adversarial examples remains a challenge, which calls for rethinking the defense strategy. In th...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial dete...
Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
Recent studies have shown that robust overfitting and the robust generalization gap are major problems in adversarial training of deep neural networks. These interesting prob...
An Approach to Machine Learning
The process of automatically recognising significant patterns within large amounts of data is called "machine learning." Throughout the last couple of decades, it has evolved into ...
Initial Experience with Pediatrics Online Learning for Nonclinical Medical Students During the COVID-19 Pandemic
Abstract. Background: To minimize the risk of infection during the COVID-19 pandemic, the learning mode of universities in China has been adjusted, and the online learning o...
Determination of Resilient Properties of Unbound Materials with Diametral and Cyclic Triaxial Test Systems
Repeated load diametral test systems are experiencing increased use to determine resilient properties of asphalt concrete and admixture stabilized materials. For these materials, t...
