Search engine for discovering works of Art, research articles, and books related to Art and Culture

Explaining Data-Driven Decisions Made by AI Systems: The Counterfactual Approach

View through CrossRef
We examine counterfactual explanations for explaining the decisions made by model-based AI systems. The counterfactual approach we consider defines an explanation as a set of the system’s data inputs that causally drives the decision (i.e., changing the inputs in the set changes the decision) and is irreducible (i.e., changing any subset of the inputs does not change the decision). We (1) demonstrate how this framework may be used to provide explanations for decisions made by general data-driven AI systems that can incorporate features with arbitrary data types and multiple predictive models, and (2) propose a heuristic procedure to find the most useful explanations depending on the context. We then contrast counterfactual explanations with methods that explain model predictions by weighting features according to their importance (e.g., Shapley additive explanations [SHAP], local interpretable model-agnostic explanations [LIME]) and present two fundamental reasons why we should carefully consider whether importance-weight explanations are well suited to explain system decisions. Specifically, we show that (1) features with a large importance weight for a model prediction may not affect the corresponding decision, and (2) importance weights are insufficient to communicate whether and how features influence decisions. We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate conditions under which counterfactual explanations explain data-driven decisions better than importance weights.
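
To make the definition concrete, here is a minimal Python sketch (an illustration under assumed inputs, not the procedure from the paper): it substitutes reference values into an input, grows a set of features until the decision flips (the causal requirement), then prunes the set so that no single feature can be dropped without losing the flip (approximating the irreducibility requirement, which strictly concerns all proper subsets). The decide, applicant, and reference objects are invented for the example.

def find_counterfactual_explanation(decide, x, reference):
    """Return an irreducible set of features whose change flips decide(x)."""
    original = decide(x)

    def flips(features):
        # Substitute reference values for the chosen features and
        # check whether the decision changes.
        x_cf = {**x, **{f: reference[f] for f in features}}
        return decide(x_cf) != original

    # Grow a candidate set until the decision flips (causal requirement).
    candidate = []
    for f in x:
        candidate.append(f)
        if flips(candidate):
            break
    else:
        return None  # no flip even when all inputs are changed

    # Prune features whose removal still leaves a flip, so the set is
    # irreducible with respect to single-feature removals.
    for f in list(candidate):
        reduced = [g for g in candidate if g != f]
        if reduced and flips(reduced):
            candidate = reduced
    return set(candidate)

# Toy decision: approve a loan when income is high and debt is low.
decide = lambda a: a["income"] > 50 and a["debt"] < 20
applicant = {"income": 60, "debt": 10, "age": 35}
reference = {"income": 30, "debt": 40, "age": 50}
print(find_counterfactual_explanation(decide, applicant, reference))
# prints {'income'}: lowering income alone reverses the approval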

Related Results

Data Augmentation using Counterfactuals: Proximity vs Diversity
Counterfactual explanations are gaining in popularity as a way of explaining machine learning models. Counterfactual examples are generally created to help interpret the decision o...
Downward counterfactual insights into weather extremes
There are many regions where the duration of reliable scientific observations of key weather hazard variables, such as rainfall and wind speed, is of the order of ...
Autonomy on Trial
This paper critically examines how US bioethics and health law conceptualize patient autonomy, contrasting the rights-based, individualist...
Counterfactual Shapley Values for Explaining Reinforcement Learning
This paper introduces an approach based on Counterfactual Shapley Values, which enhances explainability in reinforcement learning by integrating counterfactual a...
The Asymmetry of Counterfactual Dependence
A certain type of counterfactual is thought to be intimately related to causation, control, and explanation. The time asymmetry of these phenomena therefore plausibly arises from a...
Counterfactual Reasoning in Children: Evidence from an Eye-Tracking Study with Turkish-Speakers
Previous research has produced mixed results regarding the ability of children as young as four years of age to engage in counterfactual reasoning. In this study, we employed a vis...
Multi-Class Counterfactual Explanations using Support Vector Data Description
Explainability is becoming increasingly crucial in machine learning studies and, as the complexity of the model increases, so does the complexity of its explanation. Howev...
Topic-Aware Causal Intervention for Counterfactual Detection
Counterfactual statements, which describe events that did not or cannot take place, are beneficial to numerous NLP applications. Hence, we consider the problem of counterfactual de...
