STATISTICAL THEORY OF DIFFERENTIALLY PRIVATE MARGINAL-BASED DATA SYNTHESIS ALGORITHMS

Abstract

Marginal-based methods achieve promising performance in the synthetic data competition hosted by the National Institute of Standards and Technology (NIST). To handle high-dimensional data, the distribution of the synthetic data is represented by a probabilistic graphical model (e.g., a Bayesian network), while the raw data distribution is approximated by a collection of low-dimensional marginals. Differential privacy (DP) is guaranteed by introducing random noise into each low-dimensional marginal distribution. Despite their promising performance in practice, the statistical properties of marginal-based methods are rarely studied in the literature. In this paper, we study DP data synthesis algorithms based on Bayesian networks (BN) from a statistical perspective. We establish a rigorous accuracy guarantee for BN-based algorithms, where the errors are measured by the total variation (TV) distance or the L2 distance. Regarding downstream machine learning tasks, an upper bound for the utility error of the DP synthetic data is also derived. To complete the picture, we establish a lower bound on the TV accuracy that holds for every ϵ-DP synthetic data generator.

1. INTRODUCTION

In recent years, the problem of privacy-preserving data analysis has become increasingly important, and differential privacy (Dwork et al., 2006) has emerged as the foundation of data privacy. Differential privacy (DP) techniques are widely adopted by industrial companies and the U.S. Census Bureau (Johnson et al., 2017; Erlingsson et al., 2014; Nguyên et al., 2016; The U.S. Census Bureau, 2020; Abowd, 2018). One important method to protect data privacy is differentially private data synthesis (DPDS). In the setting of DPDS, a synthetic dataset is generated from a real dataset by a DP data synthesis algorithm. One can then release the synthetic dataset while the real dataset remains protected. Recently, the National Institute of Standards and Technology (NIST) organized the differential privacy synthetic data competition (NIST, 2018; 2019; 2020-2021). In the NIST competition, the state-of-the-art algorithms are marginal-based (McKenna et al., 2021), where the synthetic dataset is drawn from a noisy marginal distribution estimated from the real dataset. To deal with high-dimensional data, the distribution is usually modeled by a probabilistic graphical model (PGM) such as a Bayesian network or a Markov random field (Jordan, 1999; Wainwright et al., 2008; Zhang et al., 2017; Mckenna et al., 2019; Cai et al., 2021). Despite their empirical success in releasing high-dimensional data, as far as we know, the theoretical guarantees of marginal-based DPDS approaches are rarely studied in the literature.

In this paper, we focus on a DPDS algorithm based on Bayesian networks (BN) known as PrivBayes (Zhang et al., 2017), which is widely used for synthesizing sparse data (sparsity being measured by the degree of the BN, defined later). A BN is a directed acyclic graph in which each vertex carries a low-dimensional marginal distribution and each edge encodes a conditional distribution between two vertices. It approximates the high-dimensional distribution of the raw data with a set of well-chosen low-dimensional distributions. Random noise is added to each low-dimensional marginal to achieve differential privacy. We aim to analyze the marginal-based approach from a statistical perspective and measure the accuracy of PrivBayes under statistical distances including the total variation (TV) distance and the L2 distance.

Another property of synthetic data we are interested in is its utility on downstream machine learning tasks. Empirical evaluation of synthetic data in downstream machine learning tasks is widely studied in the literature. Existing utility metrics include Train on Synthetic data and Test on Real data (TSTR; Esteban et al., 2017) and Synthetic Ranking Agreement (SRA; Jordon et al., 2018). To the best of our knowledge, most of these utility evaluation methods are empirical and come without a theoretical guarantee. Establishing a statistical learning theory of synthetic data is another concern of this paper. Precisely, we focus on the statistical theory of PrivBayes based on the TSTR error.

Our contributions. Our contributions are three-fold. First, we theoretically analyze marginal-based synthetic data generation and derive upper bounds on the TV distance and the L2 distance between real data and synthetic data. The upper bounds show that the Bayesian network structure mitigates the "curse of dimensionality". An upper bound on the sparsity of the real data is also derived from the accuracy bounds. Second, we theoretically evaluate the utility of the synthetic data on downstream supervised learning tasks. Precisely, we bound the TSTR error between predictors trained on real data and on synthetic data. Third, we establish a lower bound on the TV distance between the synthetic data distribution and the real data distribution.
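To make the mechanism concrete, the following is a minimal sketch of marginal-based synthesis on a fixed chain network X1 → X2 → … → Xd over binary attributes. This is an illustration only, not the exact PrivBayes procedure: the function names, the uniform budget split across marginals, and the clip-and-renormalize post-processing are our simplifying assumptions (PrivBayes additionally selects the network structure privately and allocates the budget more carefully).

```python
import numpy as np

def noisy_marginal(counts, epsilon, rng):
    """Add Laplace noise (sensitivity 1 per marginal under add/remove-one DP),
    then post-process: clip negatives and renormalize to a distribution."""
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)
    if noisy.sum() == 0:          # degenerate case: fall back to uniform
        noisy = np.ones_like(noisy)
    return noisy / noisy.sum()

def synthesize_chain(data, epsilon, n_syn, seed=0):
    """Sketch for a fixed chain network X1 -> X2 -> ... -> Xd over binary
    attributes: release noisy 1- and 2-way marginals, then sample records."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    eps_per_marginal = epsilon / d     # naive budget split across d marginals
    # Noisy 1-way marginal of the root attribute.
    root_counts = np.bincount(data[:, 0], minlength=2).astype(float)
    p_root = noisy_marginal(root_counts, eps_per_marginal, rng)
    # Noisy 2-way marginals along the chain, turned into conditionals.
    conds = []
    for j in range(1, d):
        joint = np.zeros((2, 2))
        for a, b in zip(data[:, j - 1], data[:, j]):
            joint[a, b] += 1
        joint = noisy_marginal(joint, eps_per_marginal, rng)
        joint[joint.sum(axis=1) == 0] = 0.5   # guard: empty rows -> uniform
        conds.append(joint / joint.sum(axis=1, keepdims=True))  # P(Xj | Xj-1)
    # Sample synthetic records from the noisy factorization.
    syn = np.zeros((n_syn, d), dtype=int)
    syn[:, 0] = rng.choice(2, size=n_syn, p=p_root)
    for j in range(1, d):
        for i in range(n_syn):
            syn[i, j] = rng.choice(2, p=conds[j - 1][syn[i, j - 1]])
    return syn
```

Note that only d low-dimensional marginals are touched by noise, rather than all 2^d cells of the joint histogram; this is the structural reason the accuracy bounds below escape the curse of dimensionality.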

1.1. RELATED WORKS AND COMPARISONS

Broadly speaking, our work is related to a vast body of work in differential privacy (Dinur & Nissim, 2003; Dwork & Nissim, 2004; Blum et al., 2005; Dwork et al., 2007; Nissim et al., 2007; Barak et al., 2007; McSherry & Talwar, 2007; Machanavajjhala et al., 2008; Dwork et al., 2015). For example, McSherry & Talwar (2007) proposed the exponential mechanism, which is widely used in practice. Machanavajjhala et al. (2008) discussed privacy for histogram data by sampling from the perturbed cell probabilities. However, these methods are not efficient for releasing high-dimensional tabular data, since the domain size grows exponentially in the dimension (which is known as "the curse of dimensionality"). The state-of-the-art method for this problem is the marginal-based approach (Zhang et al., 2017; Qardaji et al., 2014; Zhang et al., 2021). Zhang et al. (2017) approximated the raw dataset by a sparse Bayesian network and then added noise to each vertex in the graph. Zhang et al. (2021) selected a collection of 2-way marginals and applied a gradual updating method to release synthetic data. Although most of these methods provide rigorous privacy guarantees, theoretical analysis of their accuracy is rare. Wasserman & Zhou (2010) established a statistical framework for DP and derived the accuracy of distributions estimated from noisy histograms. Our setting is different from theirs. Precisely, we analyze how noise addition and post-processing affect the conditional distribution (Lemma 6.2). Moreover, our proof handles the non-trivial interaction between the Bayesian network and noise addition.

Our lower bound (Theorem 5.1) is related to existing worst-case lower bounds under the DP constraint in the literature (Hardt & Talwar, 2010; Ullman, 2013; Bassily et al., 2014; Steinke & Ullman, 2017). Hardt & Talwar (2010) established lower bounds for the accuracy of answering linear queries with privacy budget ϵ.
Ullman (2013) derived the worst-case result that, in general, it is NP-hard to release private synthetic data that accurately preserves all two-dimensional marginals. Bassily et al. (2014) built on this result and further developed lower bounds on the excess risk for every (ϵ, δ)-DP algorithm. Our result is novel since we consider private synthetic data and the corresponding TV accuracy. Existing results for linear queries are not directly applicable to TV accuracy since they rely heavily on the linear structure.
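For contrast with the marginal-based approach, the perturbed-histogram release discussed above can be sketched as follows. This is our own simplified rendering in the spirit of Machanavajjhala et al. (2008) and Wasserman & Zhou (2010), not their exact mechanisms; note that the full joint histogram has bins**d cells, the exponential blow-up that marginal-based methods avoid.

```python
import numpy as np

def dp_histogram_synthesis(data, bins, epsilon, n_syn, seed=0):
    """Perturbed-histogram release (illustrative sketch): add Laplace(1/epsilon)
    noise to every cell count of the full d-dimensional histogram, post-process
    into a probability vector, then sample synthetic points. The number of
    cells, bins**d, grows exponentially in the dimension d."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    counts, edges = np.histogramdd(data, bins=bins)   # bins**d cells
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None).ravel()
    if noisy.sum() == 0:                              # degenerate fallback
        noisy = np.ones_like(noisy)
    probs = noisy / noisy.sum()
    # Sample cells, then a uniform point inside each sampled cell.
    cells = rng.choice(noisy.size, size=n_syn, p=probs)
    idx = np.array(np.unravel_index(cells, counts.shape)).T   # (n_syn, d)
    lo = np.stack([e[i] for e, i in zip(edges, idx.T)], axis=1)
    hi = np.stack([e[i + 1] for e, i in zip(edges, idx.T)], axis=1)
    return lo + rng.random((n_syn, d)) * (hi - lo)
```

Already at bins = 10 and d = 20 attributes there are 10^20 cells, so the Laplace noise swamps the signal in almost every cell; replacing the joint histogram by a few noisy low-dimensional marginals is exactly what the marginal-based approach analyzed in this paper does instead.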

