
EXPLORE: learning interpretable rules for patient-level prediction

Abstract

Objective: We investigate whether a trade-off occurs between predictive performance and model interpretability in real-world health care data, and illustrate how to develop clinically optimal decision rules by learning under constraints with the Exhaustive Procedure for Logic-Rule Extraction (EXPLORE) algorithm.

Methods: We enhanced EXPLORE's scalability to enable its use with real-world datasets and developed an R package that generates simple decision rules. We compared EXPLORE's performance to 7 state-of-the-art algorithms across 5 prediction tasks using data from the Dutch Integrated Primary Care Information (IPCI) database. Additionally, we characterized EXPLORE's space of near-optimal models (i.e., its Rashomon set) and conducted experiments on incorporating domain knowledge and improving existing models.

Results: The prediction models developed using LASSO, RandomForest, and XGBoost consistently performed best in terms of AUROC, followed by DecisionTree and EXPLORE. However, the decision rules generated by EXPLORE are much simpler (at most 5 predictors) than those produced by the other methods. GOSDT-G, IHT, and RIPPER performed worse. Moreover, we demonstrated that EXPLORE's Rashomon set is very large (1,381–20,320 models), with large variability in both generalizability and model diversity. We then showed the potential to find more clinically optimal decision rules with EXPLORE by incorporating domain knowledge (age/sex and task-specific features) or improving existing models (the CHADS2 score).

Conclusions: Our study shows that more complex models generally outperform simpler ones, confirming the expected interpretability-performance trade-off, although its strength varies across prediction tasks. EXPLORE's ability to learn under constraints is valuable for generating clinically optimal decision rules.
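To make the idea of exhaustive, constrained rule learning concrete, here is a minimal illustrative sketch (not the actual EXPLORE implementation, which is distributed as an R package): it exhaustively scores every conjunction of up to `max_len` binary predictors and keeps the most accurate one. All names (`explore_rules`, `feature_names`) and the toy features are hypothetical.

```python
from itertools import combinations

def explore_rules(X, y, feature_names, max_len=2):
    """Exhaustively search conjunctions of binary predictors,
    subject to a rule-length constraint (an illustrative sketch
    of constrained logic-rule search, not the EXPLORE package)."""
    best_rule, best_acc = None, -1.0
    n = len(y)
    for k in range(1, max_len + 1):
        for combo in combinations(range(len(feature_names)), k):
            # predict positive iff every predictor in the conjunction is 1
            preds = [all(row[j] for j in combo) for row in X]
            acc = sum(p == t for p, t in zip(preds, y)) / n
            if acc > best_acc:
                best_acc = acc
                best_rule = " AND ".join(feature_names[j] for j in combo)
    return best_rule, best_acc
```

Because the search is exhaustive within the length constraint, the returned rule is guaranteed optimal among all rules satisfying that constraint; this is what makes it straightforward to impose clinical requirements (e.g., a maximum of 5 predictors, or mandatory inclusion of age/sex) directly on the model space.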

Related Results

Autonomy on Trial
Photo by CHUTTERSNAP on Unsplash Abstract This paper critically examines how US bioethics and health law conceptualize patient autonomy, contrasting the rights-based, individualist...
Enhancing Non-Formal Learning Certificate Classification with Text Augmentation: A Comparison of Character, Token, and Semantic Approaches
Aim/Purpose: The purpose of this paper is to address the gap in the recognition of prior learning (RPL) by automating the classification of non-formal learning certificates using d...
Initial Experience with Pediatrics Online Learning for Nonclinical Medical Students During the COVID-19 Pandemic
Abstract Background: To minimize the risk of infection during the COVID-19 pandemic, the learning mode of universities in China has been adjusted, and the online learning o...
Predicting outcomes of smoking cessation interventions in novel scenarios using ontology-informed, interpretable machine learning
Background Systematic reviews of effectiveness estimate the relative average effects of interventions and comparators in a set of existing studies e.g., using rate ratios. However,...
A soft computing decision support framework for e-learning
Supported by technological development and its impact on everyday activities, e-Learning and b-Learning (Blended Learning) have experienced rapid growth mainly in higher education ...
Prediction using Machine Learning
This chapter begins with a concise introduction to machine learning and the classification of machine learning systems (supervised learning, unsupervised learning, and reinforcemen...
An Empirical Comparison of Interpretable Models to Post-Hoc Explanations
Recently, some effort went into explaining opaque, black-box models, such as deep neural networks or random forests. So-called model-agnostic methods typically approximat...