Search engine for discovering works of Art, research articles, and books related to Art and Culture

Bias and Fairness Detection in Dataset using CNN

View through CrossRef
Title: Bias and Fairness Detection in Dataset using CNN
Description:
As artificial intelligence (AI) continues to play a growing role in decision-making processes across sensitive domains such as healthcare, finance, recruitment, and law enforcement, concerns regarding algorithmic bias and fairness have become increasingly critical.
These concerns often originate from imbalanced or biased training data, which can lead to discriminatory outcomes and reduced trust in AI systems.
This research presents a CNN-based system designed to detect bias and evaluate fairness in datasets before they are used for model training.
The proposed system analyzes class distribution and applies statistical fairness metrics to assess whether a dataset is balanced or skewed toward specific outcomes or demographic groups.
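As a concrete illustration of this kind of class-distribution check, the sketch below computes an imbalance ratio and a normalized label entropy for a tabular dataset. It is a minimal sketch, not the paper's code; the file name and the 'outcome' column are hypothetical.

```python
import numpy as np
import pandas as pd

def class_distribution_report(labels: pd.Series) -> dict:
    """Summarize how balanced a label column is."""
    counts = labels.value_counts()
    probs = counts / counts.sum()
    # Normalized entropy: 1.0 means perfectly balanced classes, values
    # near 0.0 mean the labels collapse onto one class.
    # Assumes at least two distinct classes are present.
    entropy = float(-(probs * np.log(probs)).sum() / np.log(len(counts)))
    return {
        "class_counts": counts.to_dict(),
        "imbalance_ratio": float(counts.max() / counts.min()),
        "normalized_entropy": entropy,
    }

# Hypothetical usage: 'outcome' is the target column of an uploaded dataset.
df = pd.read_csv("dataset.csv")
print(class_distribution_report(df["outcome"]))
```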
At its core, the system employs a Convolutional Neural Network (CNN) trained to identify imbalances within the data, particularly in multi-class classification scenarios.
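The abstract does not describe the network's input encoding or architecture, so the following PyTorch sketch is only one plausible reading, with every layer size assumed: a small 1D CNN that scans a fixed-length encoding of a dataset (for example, a per-class frequency vector) and predicts a fair/biased label.

```python
import torch
import torch.nn as nn

class BiasDetectorCNN(nn.Module):
    """Hypothetical 1D CNN over a fixed-length dataset encoding."""

    def __init__(self, input_len: int = 64, n_labels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(32, n_labels)  # e.g. {fair, biased}

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_len); Conv1d expects a channel dimension.
        h = self.features(x.unsqueeze(1)).squeeze(-1)
        return self.classifier(h)

model = BiasDetectorCNN()
logits = model(torch.rand(4, 64))  # 4 hypothetical dataset encodings
print(logits.shape)                # torch.Size([4, 2])
```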
The model is supported by additional fairness metrics, such as demographic parity and equal opportunity, which provide a comprehensive evaluation of potential bias.
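These two metrics have standard definitions: demographic parity compares positive-prediction rates across groups, and equal opportunity compares true-positive rates. A minimal sketch, with toy arrays standing in for real dataset columns:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(y_pred=1 | group=a) - P(y_pred=1 | group=b)| for two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    a, b = np.unique(group)  # assumes exactly two demographic groups
    return abs(y_pred[group == a].mean() - y_pred[group == b].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    a, b = np.unique(group)
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(a) - tpr(b))  # assumes both groups contain positives

# Toy stand-in data: 1 = favorable outcome, two groups "A" and "B".
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_diff(y_pred, group))         # 0.0
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.33
```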
To ensure robustness and adaptability, the system was tested on a variety of public datasets as well as two custom-designed datasets developed during the course of the project.
These custom datasets include encrypted files to reflect real-world complexities, such as privacy-preserving data formats and secure data handling.
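The abstract does not name the encryption scheme, so the sketch below assumes symmetric Fernet encryption from the `cryptography` package as a stand-in for the privacy-preserving formats described:

```python
from io import BytesIO

import pandas as pd
from cryptography.fernet import Fernet

def load_encrypted_csv(path: str, key: bytes) -> pd.DataFrame:
    """Decrypt an encrypted CSV in memory and load it as a DataFrame."""
    with open(path, "rb") as f:
        plaintext = Fernet(key).decrypt(f.read())
    return pd.read_csv(BytesIO(plaintext))

# Hypothetical usage; the key would come from secure storage, not source code.
# key = Fernet.generate_key()
# df = load_encrypted_csv("custom_dataset.enc", key)
```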
The model successfully processed these inputs and provided accurate predictions of fairness and bias.
The user-friendly interface allows users to upload datasets, view predictions, and understand fairness scores visually, making the tool suitable for both technical and non-technical stakeholders.
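As a sketch of what such an interface could look like (the paper does not name its UI stack; Streamlit is an assumption here), an upload-and-score page might be:

```python
import pandas as pd
import streamlit as st

st.title("Dataset Bias & Fairness Check")

uploaded = st.file_uploader("Upload a CSV dataset", type="csv")
if uploaded is not None:
    df = pd.read_csv(uploaded)
    target = st.selectbox("Target column", df.columns)
    counts = df[target].value_counts()
    st.bar_chart(counts)  # visualize class balance
    st.metric("Imbalance ratio", f"{counts.max() / counts.min():.2f}")
```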
The system aims to support AI practitioners by offering an early-stage evaluation method that improves dataset transparency, increases trust in AI outcomes, and reduces the risk of unintended discrimination.
Overall, this research contributes a practical, scalable, and ethical solution for bias detection at the dataset level, serving as a step forward in the broader effort to promote fairness and accountability in artificial intelligence.

Related Results

Algorithmic Individual Fairness and Healthcare: A Scoping Review
Abstract Objective: Statistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify dispari...
Bertrand Game with Nash Bargaining Fairness Concern
The classical Bertrand game assumes that players are perfectly rational. However, much empirical research indicates that people exhibit boundedly rational behavior with fairness con...
REAL-TIME DETECTIONS OF OPENED-CLOSED EYES USING CONVOLUTIONAL NEURAL NETWORK
A sleepy condition can cause behavioral changes in the human body, and one part of the body affected is the eye; the eyes become narrower than in normal conditions, ...
Tropical Indian Ocean Mixed Layer Bias in CMIP6 CGCMs Primarily Attributed to the AGCM Surface Wind Bias
The relatively weak sea surface temperature bias in the tropical Indian Ocean (TIO) simulated in the coupled general circulation model (CGCM) from the recently released CMIP6 has be...
An Investigation of Bias in Bangla Text Classification Models
Abstract The rapid growth of natural language processing (NLP) applications has highlighted concerns about fairness and bias in text classification models. Despite signific...
Adaptive radio resource management for OFDMA-based macro- and femtocell networks
The demands and expectations of mobile users and operators keep growing and, consequently, new standards have incorporated radio access technologies that are increasingly...
SCOPE-BIAS: SOCIAL CONTEXTUAL OPTIMIZATION FOR EVALUATING BIAS IN AI SYSTEMS
This study introduces SCOPE-Bias (Social Contextual Optimization for Evaluating Bias in AI Systems), an innovative framework aimed at addressing the shortcomings of traditional bia...
