
Per-instance Differential Privacy

We consider a refinement of differential privacy: per-instance differential privacy (pDP), which captures the privacy of a specific individual with respect to a fixed data set. We show that this is a strict generalization of standard DP and inherits all its desirable properties, e.g., composition, invariance to side information, and closedness under post-processing, except that they all hold for every instance separately. When the data is drawn from a distribution, we show that per-instance DP implies generalization. Moreover, we provide explicit calculations of the per-instance DP for output perturbation on a class of smooth learning problems. The result reveals an interesting and intuitive fact: an individual has stronger privacy if he/she has a small ``leverage score'' with respect to the data set and if he/she can be predicted more accurately using the leave-one-out data set. Simulation shows a privacy-utility trade-off that is several orders of magnitude more favorable when we consider the privacy of only the users in the data set. In a case study on differentially private linear regression, we provide a novel analysis of the One-Posterior-Sample (OPS) estimator and show that when the data set is well-conditioned it provides $(\epsilon,\delta)$-pDP for any target individual and matches the exact lower bound up to a $1+\tilde{O}(n^{-1}\epsilon^{-2})$ multiplicative factor.
We also demonstrate how we can use a ``pDP to DP conversion'' step to design AdaOPS, which uses adaptive regularization to achieve the same results with $(\epsilon,\delta)$-DP.
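The leverage-score intuition above can be made concrete for ridge regression with output perturbation. The sketch below is our own illustration, not the paper's exact bound: it computes the per-instance $\ell_2$ sensitivity of the ridge estimator when one individual $(x, z)$ is added, using the Sherman-Morrison rank-one update. The function names (`ridge_fit`, `per_instance_sensitivity`) and the final Gaussian-mechanism heuristic are assumptions of this sketch.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge estimator theta = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def per_instance_sensitivity(X, y, x, z, lam):
    """||theta(D + (x,z)) - theta(D)||_2 in closed form (Sherman-Morrison).

    The sensitivity is small when the leverage x^T H x is small and when
    (x, z) is predicted well by the leave-one-out fit -- the qualitative
    behaviour the abstract describes.
    """
    d = X.shape[1]
    H = np.linalg.inv(X.T @ X + lam * np.eye(d))   # leave-one-out inverse
    theta = H @ (X.T @ y)                          # leave-one-out fit
    leverage = x @ H @ x
    residual = z - x @ theta                       # leave-one-out prediction error
    return abs(residual) * np.linalg.norm(H @ x) / (1.0 + leverage)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
x_new, z_new = rng.normal(size=5), 0.0
lam = 1.0

delta = per_instance_sensitivity(X, y, x_new, z_new, lam)

# Cross-check the closed form against direct refitting.
theta_with = ridge_fit(np.vstack([X, x_new]), np.append(y, z_new), lam)
theta_without = ridge_fit(X, y, lam)
assert np.isclose(delta, np.linalg.norm(theta_with - theta_without))

# Heuristic per-instance epsilon for the Gaussian mechanism at noise
# scale sigma and target delta = 1e-5 (standard Gaussian-mechanism
# calibration applied per instance; illustrative only).
sigma = 1.0
eps_pdp = delta * np.sqrt(2 * np.log(1.25 / 1e-5)) / sigma
```

Because `delta` scales with the leave-one-out residual and shrinks with leverage, individuals who are well predicted by the rest of the data get a smaller per-instance epsilon at the same noise level, which is the trade-off the simulations in the paper exploit.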
Journal of Privacy and Confidentiality

