Search engine for discovering works of Art, research articles, and books related to Art and Culture

Explainable Artificial Intelligence for Medical Science

Explainable Artificial Intelligence (XAI) is at the forefront of Artificial Intelligence (AI) research. As AI models have grown more complex with modern computational capabilities, their transparency has decreased. This motivates the need for XAI: under the General Data Protection Regulation (GDPR) "right to an explanation", it is unlawful to deny a person an explanation of a decision reached through algorithmic judgement. This is crucial in critical fields such as healthcare, finance, and law. This thesis focuses on the healthcare field, and more specifically on Electronic Health Records (EHRs), for the development and application of XAI methods.

This thesis offers prospective approaches to enhance the explainability of EHRs. It presents three perspectives, encompassing the Model, the Data, and the User, each aimed at elevating explainability. The model perspective draws upon improvements to the local explainability of black-box AI methods. The data perspective improves the quality of the data provided to AI methods, so that the XAI methods applied to those models account for the key property of missingness. Finally, the user perspective provides an accessible form of explainability by giving less experienced users an interface to both AI and XAI methods.

This thesis thereby provides novel approaches to improving the explanations given for EHRs, verified through empirical and theoretical analysis of a collection of introduced and existing methods. We propose a selection of XAI methods that collectively build upon the current leading literature in the field.
Specifically, we propose Polynomial Adaptive Local Explanations (PALE) for patient-specific explanations; Counterfactual-Integrated Gradients (CF-IG) and Quantified Uncertainty Counterfactual Explanations (QUCE), both of which utilise counterfactual thinking; Batch-Integrated Gradients (Batch-IG), which addresses the temporal nature of EHR data; and Surrogate Set Imputation (SSI), which addresses missing-value imputation. Finally, we propose a tool called ExMed that utilises XAI methods and provides easy access to both AI and XAI methods.
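Several of the proposed methods (CF-IG, Batch-IG) build on the standard Integrated Gradients attribution technique. As background only, here is a minimal sketch of vanilla Integrated Gradients on a toy differentiable model; the model, weights, and function names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(x, baseline, grad_fn, steps=100):
    """Riemann-sum approximation of Integrated Gradients:
    IG_i = (x_i - baseline_i) * mean_k dF/dx_i(baseline + alpha_k * (x - baseline)),
    with alpha_k sampled at interval midpoints between 0 and 1."""
    alphas = (np.arange(steps) + 0.5) / steps
    diff = x - baseline
    grads = np.array([grad_fn(baseline + a * diff) for a in alphas])
    return diff * grads.mean(axis=0)

# Toy model: logistic regression F(x) = sigmoid(w . x), with analytic gradient.
w = np.array([2.0, -1.0, 0.5])

def model(x):
    return sigmoid(w @ x)

def model_grad(x):
    s = sigmoid(w @ x)
    return s * (1.0 - s) * w

x = np.array([1.0, 0.5, -2.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline, model_grad)

# Completeness axiom: attributions sum to F(x) - F(baseline).
print(attr.sum(), model(x) - model(baseline))
```

The completeness property checked at the end is what makes Integrated Gradients attractive as a base for counterfactual and temporal extensions: the per-feature attributions exactly account for the change in model output between the baseline and the input.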
Swansea University

Related Results

Assessing Explainable in Artificial Intelligence: A TOPSIS Approach to Decision-Making
Explainable in Artificial Intelligence (AI) is the ability to comprehend and explain how AI models generate judgments or predictions. The complexity of AI systems, especially machi...
Artificial intelligence in Medical Education-A Crosssectional study in Private Setup
Objective: For healthcare providers, expectations, duties, and job descriptions must change as the information age fades and the artificial intelligence age becomes more prevalent....
New Era’s of Artificial Intelligence in Pharmaceutical Industries
Artificial Intelligence (AI) is the future of pharmaceutical industries. We make our tasks easier with help of Artificial Intelligence in future. With help of Artificial Intelligen...
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON THE STANDARDIZATION AND IMPROVEMENT OF NURSING CARE
Background. The rapid advancement of artificial intelligence technologies and their implementation in medical practice create new opportunities for enhancing the quality of patient...
Artificial Intelligence and Justice: Opportunities and Risks
The article focuses on the possibility of using artificial intelligence technology in judicial activity and assesses the admissibility of granting artificial intelligence the pow...
“Artificial Intelligence”: The Associative Field of Journalism Students
Artificial Intelligence today can be called one of the most discussed phenomena. Meanwhile, the boundaries of this term are extremely broad and blurred. Such breadth of meaning may...
Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment
Artificial intelligence has significantly enhanced the research paradigm and spectrum with a substantiated promise of continuous applicability in the real world domain. Artificial ...
