
Navigating Fairness in AI-based Prediction Models: Theoretical Constructs and Practical Applications

Abstract
Artificial Intelligence (AI)-based prediction models, including risk scoring systems and decision support systems, are increasingly adopted in healthcare. Addressing AI fairness is essential to fighting health disparities and achieving equitable performance and patient outcomes. Numerous and conflicting definitions of fairness complicate this effort. This paper aims to structure the transition of AI fairness from theory to practical application with appropriate fairness metrics. For 27 definitions of fairness identified in the recent literature, we assess their relation to the model’s intended use, the type of decision influenced, and ethical principles of distributive justice. We advocate that, due to limitations in some notions of fairness, clinical utility, performance-based metrics (area under the receiver operating characteristic curve), calibration, and statistical parity are the most relevant group-based metrics for medical applications. Through two use cases, we demonstrate that different metrics may be applicable depending on the intended use and ethical framework. Our approach provides a foundation for AI developers and assessors in evaluating model fairness and the impact of bias mitigation strategies, hence promoting more equitable AI-based implementations.
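Of the group-based metrics the abstract singles out, statistical parity is the simplest to illustrate: it holds when the model's positive prediction rate is the same across groups defined by a sensitive attribute. A minimal sketch in plain Python, assuming binary predictions and two hypothetical demographic groups (the function names and data here are illustrative, not from the paper):

```python
# Statistical parity sketch: the metric is satisfied when
# P(prediction = 1 | group A) equals P(prediction = 1 | group B).

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def statistical_parity_difference(preds_group_a, preds_group_b):
    """Difference in positive prediction rates between two groups.

    A value of 0.0 indicates statistical parity; larger absolute
    values indicate a larger disparity between the groups.
    """
    return positive_rate(preds_group_a) - positive_rate(preds_group_b)

# Hypothetical binary model outputs for two patient groups.
group_a = [1, 0, 1, 1]  # positive rate 0.75
group_b = [1, 0, 0, 1]  # positive rate 0.50
print(statistical_parity_difference(group_a, group_b))  # 0.25
```

In practice the same group-wise comparison is made for the other metrics the paper names (per-group AUROC, per-group calibration); statistical parity is shown here only because it reduces to a single rate comparison.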

Related Results

Algorithmic Individual Fairness and Healthcare: A Scoping Review
AbstractObjectiveStatistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify dispari...
Bertrand Game with Nash Bargaining Fairness Concern
The classical Bertrand game assumes that players are perfectly rational. However, much empirical research indicates that people exhibit bounded rational behavior with fairness con...
Adaptive Radio Resource Management for OFDMA-based Macro- and Femtocell Networks
The demands and expectations of mobile users and operators grow unabated, and, consequently, the new standards have incorporated increasingly advanced radio access technologies...
Fair Allocation of Network Resources for Internet Users
In a commercial Internet, the traffic behavior is determined by the contracts between the ISPs and the users, where a user can be a dial-up user, or one corporate network or a grou...
Fairness and Justice in Language Assessment
The concept of fairness, as related to assessment and assessment practice, has been debated regularly since the late 1980s, but disagreements have regularl...
An Investigation of Bias in Bangla Text Classification Models
Abstract: The rapid growth of natural language processing (NLP) applications has highlighted concerns about fairness and bias in text classification models. Despite signific...
Bias and Fairness Detection in Dataset using CNN
As artificial intelligence (AI) continues to play a growing role in decision-making processes across sensitive domains such as healthcare, finance, recruitment, and law enforce...
