Search engine for discovering works of Art, research articles, and books related to Art and Culture

Adversarial Anomaly Explanation

View through CrossRef
Abstract Given a data set and a single object known beforehand to be anomalous, the outlier explanation problem consists of explaining the abnormality of the input object with respect to the data set population. The approach pursued in this paper solves this task by finding an explanation, namely a piece of information encoding the characteristics that place the anomalous object far from the normal data. Our explanation consists of two components: the choice, encoding the set of features in which the anomalous object deviates from the rest of the population, and the mask, encoding the associated amount of deviation from normality. The goal here is not to explain the decisional process of a model but, rather, to provide an explanation justifying the output of that process by inspecting only the data set on which the decision was made. We tackle this problem by introducing an innovative deep learning architecture, called MMOAM, based on the adversarial learning paradigm. We assess the effectiveness of our technique on both synthetic and real data sets and compare it against state-of-the-art outlier explanation methods, reporting better performance in different scenarios.
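To make the choice/mask idea concrete, here is a minimal toy sketch (not the MMOAM architecture itself, which is adversarial and learned): the choice marks the features in which the anomaly deviates strongly from the population, and the mask quantifies that deviation. Per-feature z-scores with an arbitrary threshold of 3 are used here purely as an illustrative stand-in.

```python
import numpy as np

# Toy illustration of an outlier explanation as (choice, mask).
# This is NOT the paper's MMOAM model, just a z-score stand-in.

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(500, 4))   # the "normal" population
anomaly = np.array([0.1, 5.0, -0.2, 4.0])    # deviates in features 1 and 3

mu = data.mean(axis=0)
sigma = data.std(axis=0)

mask = (anomaly - mu) / sigma   # signed amount of deviation per feature
choice = np.abs(mask) > 3.0     # features flagged as explaining the anomaly

print(choice)  # roughly [False, True, False, True]
print(np.round(mask, 2))
```

In this sketch the explanation would report that the object is anomalous because of features 1 and 3 (the choice), each deviating by several standard deviations from the population (the mask).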

Related Results

Causal explanation
An explanation is an answer to a why-question, and so a causal explanation is an answer to ‘Why X?’ that says something about the causes of X. For example, ‘Because it rained’ as a...
Improving Adversarial Robustness via Finding Flat Minimum of the Weight Loss Landscape
<p>Recent studies have shown that robust overfitting and robust generalization gap are a major trouble in adversarial training of deep neural networks. These interesting prob...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial dete...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their pred...
A systematic survey: role of deep learning-based image anomaly detection in industrial inspection contexts
Industrial automation is rapidly evolving, encompassing tasks from initial assembly to final product quality inspection. Accurate anomaly detection is crucial for ensuring the reli...
Anomaly Detection in Individual Specific Networks through Explainable Generative Adversarial Attributed Networks
Recently, the availability of many omics data sources has given rise to modelling biological networks for each individual or patient. Such networks are able to represent individ...
Proportion of structural congenital anomaly in eastern Africa; A systematic review and meta-analysis
Introduction: The birth of a child with a congenital anomaly is a stressful situation for mothers and for society. Globally, about 8 million children are born each year with congenital abnormalities...
