Quantifying uncertainty in graph neural network explanations
In recent years, analyzing explanations of the predictions of Graph Neural Networks (GNNs) has attracted increasing attention. Despite this progress, most existing methods do not adequately consider the inherent uncertainties stemming from the randomness of model parameters and graph data, which may lead to overconfident and misleading explanations. Quantifying these uncertainties is challenging for most GNN explanation methods, since they obtain the prediction explanation in a post-hoc and model-agnostic manner without accounting for the randomness of graph data and model parameters. To address these problems, this paper proposes a novel uncertainty quantification framework for GNN explanations. To mitigate the randomness of graph data in the explanation, our framework accounts for two distinct data uncertainties, allowing for a direct assessment of the uncertainty in GNN explanations. To mitigate the randomness of learned model parameters, our method learns the parameter distribution directly from the data, obviating the need for assumptions about specific distributions. Moreover, the explanation uncertainty within model parameters is also quantified based on the learned parameter distributions. This holistic approach can integrate with any post-hoc GNN explanation method. Empirical results from our study show that our proposed method sets a new standard for GNN explanation performance across diverse real-world graph benchmarks.
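The idea of quantifying explanation uncertainty by sampling from a learned parameter distribution can be illustrated with a minimal sketch. This is not the paper's actual framework: the toy explainer, the Gaussian perturbation of a single weight vector, and all variable names below are hypothetical stand-ins chosen only to show the sampling-and-aggregation pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def explain(edge_features, weights):
    """Toy post-hoc explainer: edge importance as a softmax over edge scores."""
    scores = edge_features @ weights
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

edge_features = rng.normal(size=(6, 4))   # 6 edges, 4 features each (synthetic)
weight_mean = rng.normal(size=4)          # stand-in for a learned parameter mean

# Sample parameters around the learned mean (a crude stand-in for drawing
# from a learned parameter distribution) and collect one explanation each.
samples = np.stack([
    explain(edge_features, weight_mean + 0.1 * rng.normal(size=4))
    for _ in range(200)
])

importance = samples.mean(axis=0)   # point explanation per edge
uncertainty = samples.std(axis=0)   # per-edge explanation uncertainty
```

Edges whose importance varies strongly across parameter samples receive high `uncertainty`, flagging parts of the explanation that should not be trusted with confidence.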
Frontiers Media SA
Related Results
Reserves Uncertainty Calculation Accounting for Parameter Uncertainty
Abstract
An important goal of geostatistical modeling is to assess output uncertainty after processing realizations through a transfer function, in particular, to...
Domination of Polynomial with Application
In this paper, we initiate the study of the domination polynomial. Let G=(V,E) be a simple, finite, and directed graph without isolated vertices. We present a study of the Ira...
Abstract 902: Explainable AI: Graph machine learning for response prediction and biomarker discovery
Abstract
Accurately predicting drug sensitivity and understanding what is driving it are major challenges in drug discovery. Graphs are a natural framework for captu...
Sampling Space of Uncertainty Through Stochastic Modelling of Geological Facies
Abstract
The way the space of uncertainty should be sampled from reservoir models is an essential point for discussion that can have a major impact on the assessm...
CG-TGAN: Conditional Generative Adversarial Networks with Graph Neural Networks for Tabular Data Synthesizing
Data sharing is necessary for AI to be widely used, but sharing sensitive data with others risks compromising privacy.
To solve these problems, it is necessary to synthesize realistic t...
Fuzzy Chaotic Neural Networks
An understanding of the human brain’s local function has improved in recent years, but how the brain works as a whole is still obscure. Both fuzzy logic ...
The Complexity of Pencil Graph and Line Pencil Graph
Let G be a connected, undirected graph. Every connected graph G must contain a spanning tree T, which is a subgraph of G that is a tree and contains all the nodes of G. T...
MSRHNN: Multidimensional Social Relation under Heterogeneous Neural Network for Recommendation
Abstract
With the growing popularity of mobile smart devices and the availability of 4G and 5G networks, social recommendation systems have become a hot research topic for ...


