
Federated learning and differential privacy: Machine learning and deep learning for biomedical image data classification

Background: The integration of differential privacy and federated learning in healthcare is key to maintaining patient confidentiality while ensuring accurate predictive modeling. With increasing concerns about privacy, it is essential to explore methods that protect data privacy without compromising model performance.

Objective: This study evaluates the effectiveness of feedforward neural networks (FNNs), Gaussian processes (GPs), and a multilayer perceptron (MLP) deep neural network in classifying biomedical image data, incorporating federated learning to enhance privacy preservation.

Method: We implemented FNN, GP, and MLP models using federated learning and differential privacy techniques. Models were evaluated on training and validation accuracy, correlation coefficients, mean absolute error (MAE), root mean squared error (RMSE), and relative errors, including relative absolute error (RAE) and relative root squared error (RRSE).

Results: The FNN achieved 86.49% training accuracy and 82.08% overall accuracy but showed potential overfitting, with 68.75% validation accuracy. The GP model had a correlation coefficient of 0.9741, an MAE of 108.38, and an RMSE of 173.49. The MLP outperformed the other models, with a correlation coefficient of 0.9980, an MAE of 36.80, and an RMSE of 51.01. Federated learning improved privacy while maintaining model performance.

Conclusion: Federated learning with differential privacy offers a promising solution for secure and accurate biomedical image classification, supporting privacy-preserving machine learning in medical diagnostics without compromising performance.
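The error metrics reported in the abstract (MAE, RMSE, RAE, RRSE) have standard definitions that can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name is hypothetical, and RAE/RRSE are computed relative to a naive mean predictor, as is conventional.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, RAE, and RRSE for a set of predictions.

    RAE and RRSE normalize the absolute and squared errors by the
    corresponding errors of a baseline that always predicts the mean
    of the true values.
    """
    n = len(y_true)
    mean_true = sum(y_true) / n
    abs_err = sum(abs(t - p) for t, p in zip(y_true, y_pred))
    sq_err = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    abs_dev = sum(abs(t - mean_true) for t in y_true)   # baseline absolute error
    sq_dev = sum((t - mean_true) ** 2 for t in y_true)  # baseline squared error
    return {
        "MAE": abs_err / n,
        "RMSE": math.sqrt(sq_err / n),
        "RAE": abs_err / abs_dev,            # relative absolute error
        "RRSE": math.sqrt(sq_err / sq_dev),  # relative root squared error
    }
```

Values below 1.0 for RAE and RRSE indicate the model beats the mean-predictor baseline.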
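The federated setup the abstract describes can be sketched as clipped federated averaging with Gaussian noise added at the server, a common way of combining federated learning with differential privacy. The names and parameters here (clip_update, dp_federated_round, clip_norm, noise_std) are illustrative assumptions, not taken from the paper.

```python
import random

def clip_update(update, clip_norm):
    """Clip a client's update vector so its L2 norm is at most clip_norm."""
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_federated_round(global_model, client_updates, clip_norm=1.0,
                       noise_std=0.1, rng=None):
    """One round of federated averaging with Gaussian differential-privacy noise.

    Each client update is clipped to bound its sensitivity, the server
    averages the clipped updates, and Gaussian noise calibrated to the
    clipping bound is added before updating the global model.
    """
    rng = rng or random.Random(0)
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = [sum(c[i] for c in clipped) / n for i in range(len(global_model))]
    noisy = [a + rng.gauss(0.0, noise_std * clip_norm / n) for a in avg]
    return [w + d for w, d in zip(global_model, noisy)]
```

Clipping bounds each client's influence on the aggregate, which is what lets the added Gaussian noise yield a formal privacy guarantee.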

Related Results

Augmented Differential Privacy Framework for Data Analytics
Abstract Differential privacy has emerged as a popular privacy framework for providing privacy preserving noisy query answers based on statistical properties of databases. ...
Federated Data Linkage in Practice
In recent years, great strides have been made towards the deployment of federated systems for data research, including exploring federated trusted research environments (TREs). The...
Differential privacy learned index
Indexes are fundamental components of database management systems, traditionally implemented through structures like B-Tree, Hash, and BitMap indexes. These index structures map ke...
Distributed Learning for Heart Disease Risk Prediction Based on Key Clinical Parameters with Evaluation Metrics Analysis
Abstract The purpose of this study is to design and test a decentralized federated learning framework that integrates a mutual learning approach with a Hierarchical Dirichlet Pro...
Image-based crop disease detection with federated learning
Abstract Crop disease detection and management is critical to improving productivity, reducing costs, and promoting environmentally friendly crop treatment methods. Modern ...
Privacy Risk in Recommender Systems
Nowadays, recommender systems are widely used in online applications to filter information and help users select relevant items. They prevent users from becoming o...
ML-Powered Privacy Preservation in Biomedical Data Sharing
The sharing of biomedical data is essential for accelerating healthcare research, fostering medical innovation, and improving patient outcomes. Such data encompasses a wide range o...
THE SECURITY AND PRIVACY MEASURING SYSTEM FOR THE INTERNET OF THINGS DEVICES
The purpose of the article: to close the gap in the existing need for a set of clear and objective security and privacy metrics for IoT device users and manufacturers, and a...
