Comparison of Fully Homomorphic Encryption and Garbled Circuits approaches in Privacy-Preserving Machine Learning
As Machine Learning (ML) makes its way into fields such as healthcare, finance, and natural language processing (NLP), concerns over data privacy and model confidentiality continue to grow. Privacy-Preserving Machine Learning (PPML) addresses this challenge by enabling inference on private data without revealing sensitive inputs or proprietary models. Two widely studied approaches in this domain, both built on secure computation techniques from cryptography, are Fully Homomorphic Encryption (FHE) and Garbled Circuits (GC). This thesis presents a comparative evaluation of FHE and GC for secure neural network inference (SNNI). A two-layer neural network (NN) was implemented using the CKKS scheme from the Microsoft SEAL library (FHE) and IntelLabs' TinyGarble2.0 framework (GC). Both implementations are evaluated under a semi-honest threat model, measuring inference output error, round-trip time, peak memory usage, communication overhead, and number of communication rounds. The results reveal a trade-off: modular GC offers faster execution and lower memory consumption, while FHE supports non-interactive inference. The reproducible implementations aid secure model deployment in real-world ML-as-a-Service (MLaaS) settings.
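The property at the heart of the FHE approach is that arithmetic on ciphertexts corresponds to arithmetic on the underlying plaintexts, so a server can evaluate a model without ever decrypting the input. As a minimal illustration of that idea, here is a sketch of the classic Paillier cryptosystem (additively homomorphic only, not the CKKS scheme used in the thesis), in pure Python with deliberately tiny demo primes; all names and parameters below are illustrative, not from the thesis code:

```python
import random
from math import gcd

def keygen(p=101, q=103):
    """Toy Paillier keypair. Real deployments need >= 2048-bit primes."""
    n = p * q
    lam = (p - 1) * (q - 1)   # a multiple of lcm(p-1, q-1), which suffices here
    g = n + 1                 # standard choice of generator
    mu = pow(lam, -1, n)      # with g = n+1, L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(2, n)
    while gcd(r, n) != 1:     # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n   # Paillier's L(x) = (x - 1) / n
    return (L * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 20), encrypt(pk, 22)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
total = decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2))
assert total == 42
```

CKKS goes well beyond this sketch: it packs vectors of approximate fixed-point values into one ciphertext and supports both addition and a limited depth of multiplications, which is what makes encrypted neural-network layers (matrix products plus polynomial activations) expressible at all.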
Related Results
Development Paillier's library of fully homomorphic encryption
Homomorphic cryptography is considered as one of the new areas of cryptography. The article presents the main areas of application of homomorphic encryption. An analysis of existing deve...
Power of Homomorphic Encryption in Secure Data Processing
Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. This paper presents a detailed discuss...
Homomorphic Encryption and its Application to Blockchain
The concept, method, algorithm and application of the advanced field of cryptography, homomorphic encryption, as well as its application to the field of blockchain are discussed in...
Leveraging Searchable Encryption through Homomorphic Encryption: A Comprehensive Analysis
The widespread adoption of cloud infrastructures has revolutionized data storage and access. However, it has also raised concerns regarding the privacy of sensitive data. To addres...
Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning
Privacy protection has been an important concern with the great success of machine learning. This paper proposes a multi-party privacy preserving machine learning framework,...
Segmented encryption algorithm for privacy and net neutrality in distributed cloud systems
The advent of distributed cloud systems has revolutionized data storage and access, providing flexibility and scalability across various industries. However, these benefits come wi...
Secure Federated Learning with a Homomorphic Encryption Model
Federated learning (FL) offers collaborative machine learning across decentralized devices while safeguarding data privacy. However, data security and privacy remain key concerns. ...
Privacy Preserving Image Retrieval Using Multi-Key Random Projection Encryption and Machine Learning Decryption
Homomorphic Encryption (HE), Multiparty Computation (MPC), Differential Privacy (DP) and Random Projection (RP) have been used in privacy preserving computing. The main benefit of ...

