How Can We Analyze Differentially-Private Synthetic Datasets?

Anne-Sophie Charest, Carnegie Mellon University

Paper to appear in a future issue of the Journal of Privacy and Confidentiality.

Abstract

Synthetic datasets generated within the multiple imputation framework are now commonly used by statistical agencies to protect the confidentiality of their respondents. More recently, researchers have also proposed techniques to generate synthetic datasets that offer the formal guarantee of differential privacy. While combining rules have been derived for the first type of synthetic datasets, little has been said about the analysis of differentially-private synthetic datasets generated with multiple imputations. In this paper, we show that the usual combining rules cannot be used to analyze synthetic datasets generated to achieve differential privacy. We consider specifically the case of generating synthetic count data with the beta-binomial synthesizer, and illustrate our discussion with simulation results. We also propose, as a simple alternative, a Bayesian model that explicitly models the mechanism of synthetic data generation.
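The beta-binomial synthesizer mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Beta(α, β) prior on the binomial proportion and draws each synthetic count from the resulting posterior predictive distribution. The function name and the example parameter values are hypothetical; the specific prior choices needed to guarantee ε-differential privacy are derived in the paper and not reproduced here.

```python
import random

def beta_binomial_synthesizer(x, n, alpha, beta, m, rng=None):
    """Draw m synthetic counts for an observed count x out of n trials.

    With a Beta(alpha, beta) prior on the binomial proportion p, the
    posterior after observing x successes is Beta(alpha + x, beta + n - x).
    Each synthetic dataset samples a proportion from this posterior and
    then a new count of n trials at that proportion.
    """
    rng = rng or random.Random()
    # Posterior parameters after observing x successes in n trials
    a_post, b_post = alpha + x, beta + (n - x)
    synthetic = []
    for _ in range(m):
        p = rng.betavariate(a_post, b_post)          # posterior draw of the proportion
        synthetic.append(sum(rng.random() < p for _ in range(n)))  # binomial draw
    return synthetic

# Hypothetical example: observed count 40 out of 100, five synthetic replicates
counts = beta_binomial_synthesizer(x=40, n=100, alpha=1.0, beta=1.0, m=5)
```

Generating m such replicates mirrors the multiple-imputation setting the paper analyzes; its point is that standard combining rules applied to these replicates do not yield valid inferences when the prior is chosen for differential privacy.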