Search engine for discovering works of Art, research articles, and books related to Art and Culture

Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks

View through CrossRef
Abstract
Uncertainty estimation has received considerable attention in applied machine learning as a way to capture model uncertainty. For instance, the Monte-Carlo dropout method (MC-dropout), an approximate Bayesian approach, has attracted wide interest for producing model uncertainty due to its simplicity and efficiency. However, MC-dropout has revealed shortcomings in capturing erroneous predictions that lie in overlapping classes. Such predictions stem from noisy data points that can neither be reduced by more training data nor detected by model uncertainty. On the other hand, Monte-Carlo based on adversarial attacks (MC-AA) perturbs the inputs using the adversarial-attack idea to capture model uncertainty, and it mitigates the shortcomings of the earlier methods by capturing wrong labels in overlapping regions. Motivated by this method, which had only been validated with standard neural networks, we apply MC-AA to various graph neural network models to obtain uncertainties on two public real-world graph datasets, Elliptic and GitHub. First, we perform binary node classification; then we apply MC-AA and other recent uncertainty estimation methods to capture the uncertainty of the models. Uncertainty evaluation metrics are computed to evaluate and compare the quality of each model's uncertainty estimates. We highlight the efficacy of MC-AA in capturing uncertainty in graph neural networks, where MC-AA outperforms the other methods considered.
Springer Science and Business Media LLC
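The two approaches contrasted in the abstract can be sketched on a toy logistic classifier. This is a minimal illustration, not the paper's setup: the weights, dropout rate, number of samples, and epsilon grid below are all assumed values, and MC-AA is shown in its simplest gradient-sign form.

```python
# Hypothetical sketch of MC-dropout vs. MC-AA uncertainty on a toy
# logistic model; all numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.5, -2.0, 0.5])   # toy "trained" weights
x = np.array([0.2, 0.4, -0.1])   # a single input point

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_dropout(x, w, p=0.5, T=100):
    """MC-dropout: average T stochastic forward passes, each with a
    random Bernoulli mask on the weights (dropout kept on at test time)."""
    preds = []
    for _ in range(T):
        mask = rng.random(w.shape) > p            # drop each weight w.p. p
        preds.append(sigmoid(x @ (w * mask) / (1 - p)))
    preds = np.array(preds)
    return preds.mean(), preds.var()              # prediction, uncertainty

def mc_aa(x, w, eps_grid=np.linspace(-0.05, 0.05, 21)):
    """MC-AA: perturb the *input* back and forth along the adversarial
    (gradient-sign) direction and measure how much the output varies."""
    p0 = sigmoid(x @ w)
    y_hat = round(float(p0))                      # model's own predicted label
    grad = (p0 - y_hat) * w                       # d(logistic loss)/dx, closed form
    direction = np.sign(grad)
    preds = np.array([sigmoid((x + e * direction) @ w) for e in eps_grid])
    return preds.mean(), preds.var()

m_do, v_do = mc_dropout(x, w)
m_aa, v_aa = mc_aa(x, w)
```

The key difference the paper exploits: MC-dropout perturbs the model (weight masks), while MC-AA perturbs the input along its most sensitive direction, which is why it can flag points sitting in overlapping class regions that weight noise alone misses.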

Related Results

Reserves Uncertainty Calculation Accounting for Parameter Uncertainty
Abstract An important goal of geostatistical modeling is to assess output uncertainty after processing realizations through a transfer function, in particular, to...
Enhancing Adversarial Robustness through Stable Adversarial Training
Deep neural network models are vulnerable to adversarial attacks, such as gradient-based attacks. Even small perturbations can cause significant differences in their pred...
Deception-Based Security Framework for IoT: An Empirical Study
The large number of Internet of Things (IoT) devices in use has provided a vast attack surface. The security in IoT devices is a significant challenge considering c...
Fuzzy Chaotic Neural Networks
Understanding of the human brain's local function has improved in recent years, but cognition of the human brain's working process as a whole is still obscure. Both fuzzy logic ...
On the role of network dynamics for information processing in artificial and biological neural networks
Understanding how interactions in complex systems give rise to various collective behaviours has been of interest for researchers across a wide range of fields. However, despite ma...
Adversarial attacks on deepfake detection: Assessing vulnerability and robustness in video-based models
The increasing prevalence of deepfake media has led to significant advancements in detection models, but these models remain vulnerable to adversarial attacks that exploit weakness...
Improving Diversity and Quality of Adversarial Examples in Adversarial Transformation Network
Abstract This paper proposes a method to mitigate two major issues of Adversarial Transformation Networks (ATN) including the low diversity and the low quality of adversari...
CG-TGAN: Conditional Generative Adversarial Networks with Graph Neural Networks for Tabular Data Synthesizing
Data sharing is necessary for AI to be widely used, but sharing sensitive data with others carries privacy risks. To solve these problems, it is necessary to synthesize realistic t...
