Confabulated Explanations
Abstract
In this chapter, the author argues that the ill-grounded explanations agents sincerely offer for their choices have the potential for epistemic innocence. Such explanations are not based on evidence about the causes of the agents’ behaviour and typically turn out to be inaccurate. That is because agents tend to underestimate the role of priming effects, implicit biases, and basic emotional reactions in their decision making. However, offering explanations for their choices, even when the explanations are ill-grounded, enables them to share information about their choices with peers, facilitating peer feedback and self-reflection. Moreover, by providing plausible explanations for their behaviour—rather than acknowledging the influence of factors that cannot be easily controlled—agents preserve a sense of themselves as competent and largely coherent decision makers, which can improve their decision making.

