DIFFERENTIALLY PRIVATE SYNTHETIC DATA: APPLIED EVALUATIONS AND ENHANCEMENTS

Abstract

Machine learning practitioners frequently seek to leverage the most informative available data, without violating the data owner's privacy, when building predictive models. Differentially private data synthesis protects personal details from exposure, and allows for the training of differentially private machine learning models on privately generated datasets. But how can we effectively assess the efficacy of differentially private synthetic data? In this paper, we survey four differentially private generative adversarial networks for data synthesis. We evaluate each of them at scale on five standard tabular datasets, and in two applied industry scenarios. We benchmark with novel metrics from recent literature and other standard machine learning tools. Our results suggest that certain synthesizers are better suited to particular privacy budgets, and we further demonstrate complicating domain-based tradeoffs in selecting an approach. We offer experimental insight from applied machine learning scenarios with private internal data to researchers and practitioners alike. In addition, we propose QUAIL, a two-model hybrid approach to generating synthetic data. We examine QUAIL's tradeoffs, and note circumstances in which it outperforms baseline differentially private supervised learning models under the same budget constraint.

1. INTRODUCTION

Maintaining an individual's privacy is a major concern when collecting sensitive information from groups or organizations. A formalization of privacy, known as differential privacy, has become the gold standard for protecting information from malicious agents (Dwork, 2008). Differential privacy offers some of the most stringent known theoretical privacy guarantees (Dwork et al., 2014). Intuitively, for some query on some dataset, a differentially private algorithm produces an output, regulated by a privacy parameter ε, that is statistically indistinguishable from the output of the same query on the same dataset had any one individual's information been removed. This powerful tool has been adopted by researchers and industry leaders, and has become particularly interesting to machine learning practitioners, who hope to leverage privatized data in training predictive models (Ji et al., 2014; Vietri et al., 2020). Because differential privacy often depends on adding noise, the results of differentially private algorithms can come at the cost of data accuracy and utility. However, differentially private machine learning algorithms have shown promise across a number of domains: they can provide tight privacy guarantees while still producing accurate predictions (Abadi et al., 2016). A drawback to most methods, however, is the one-off nature of training: once the model is produced, the privacy budget for a real dataset can be entirely consumed. The differentially private model is therefore inflexible to retraining and difficult to share or verify: the output model is a black box. This can be especially disadvantageous in the presence of high-dimensional data that require rigorous preprocessing techniques like dimensionality reduction or feature selection (Hay et al., 2016). With a limited budget to spend, data scientists cannot exercise free rein over a dataset, and so sacrifice model quality.
In an effort to remedy this, and other challenges faced by traditional differentially private methods for querying, we can use differentially private techniques for synthetic data generation, investigate the privatized data, and train informed supervised learning models. In order to use the many state-of-the-art methods for differentially private synthetic data effectively in industry domains, we must first address pitfalls in practical analysis, such as the lack of realistic benchmarking (Arnold & Neunhoeffer, 2020). Benchmarking is non-trivial, as many new state-of-the-art differentially private synthetic data algorithms leverage generative adversarial networks (GANs), making them expensive to evaluate on large-scale datasets (Zhao et al., 2019). Furthermore, many state-of-the-art approaches lack direct comparisons to one another, and by the nature of the privatization mechanisms, interpreting experimental results is non-trivial (Jayaraman & Evans, 2019). New metrics presented to analyze differentially private synthetic data methods may themselves need more work to understand, especially in the domain of tabular data (Ruggles et al., 2019; Machanavajjhala et al., 2017). To that end, our contributions in this paper are threefold. (1) We introduce more realistic benchmarking. Practitioners commonly collect state-of-the-art approaches for comparison in a shared environment (Xu et al., 2019). We provide our evaluation framework, with extensive comparisons on both standard datasets and our real-world industry applications. (2) We provide experimentation on novel metrics at scale. We stress the tradeoff between synthetic data utility and statistical similarity, and offer guidelines for previously unevaluated data. (3) We present a straightforward and pragmatic enhancement, QUAIL, that addresses the tradeoff between utility and statistical similarity.
QUAIL's simple modification to a differentially private data synthesis architecture boosts synthetic data utility in machine learning scenarios without harming summary statistics or privacy guarantees.

2. BACKGROUND

Differential Privacy (DP) is a formal definition of privacy offering strong assurances against various re-identification and reconstruction attacks (Dwork et al., 2006; 2014). In the last decade, DP has attracted significant attention due to its provable privacy guarantees and ability to quantify privacy loss, as well as unique properties such as robustness to auxiliary information, composability enabling modular design, and group privacy (Dwork et al., 2014; Abadi et al., 2016).

Definition 1 (Differential Privacy, Dwork et al. (2006)). A randomized function K provides (ε, δ)-differential privacy if for all S ⊆ Range(K) and all neighboring datasets D, D′ differing on a single entry,

Pr[K(D) ∈ S] ≤ e^ε · Pr[K(D′) ∈ S] + δ.

This is a standard definition of DP, implying that the outputs of a differentially private algorithm on datasets that vary by a single individual are indistinguishable, bounded by the privacy parameter ε. Here, ε is a non-negative number otherwise known as the privacy budget. Smaller values of ε more rigorously enforce privacy, but often decrease data utility. An important property of DP is its resistance to post-processing: given an (ε, δ)-differentially private algorithm K : D → O and an arbitrary randomized mapping f : O → O′, the composition f ∘ K : D → O′ is also (ε, δ)-differentially private.

Currently, the widespread accessibility of data has increased data protection and privacy regulations, leading to a surge of research into applied scenarios for differential privacy (Allen et al., 2019; Ding et al., 2017; Doudalis et al., 2017). There have been several studies into protecting individuals' privacy during model training (Li et al., 2014; Zhang et al., 2015; Feldman et al., 2018). In particular, several studies have attempted to solve the problem of preserving privacy in deep learning (Phan et al., 2017; Abadi et al., 2016; Shokri & Shmatikov, 2015; Xie et al., 2018; Zhang et al., 2018; Jordon et al., 2018b; Torkzadehmahani et al., 2019). Here, two main techniques for training models with differential privacy are discussed:

DP-SGD. Differentially Private Stochastic Gradient Descent (DP-SGD), proposed by Abadi et al. (2016), was one of the first methods to make the stochastic gradient descent (SGD) computation differentially private. Intuitively, DP-SGD minimizes its loss function while preserving differential privacy by clipping each gradient's l2 norm to reduce the model's sensitivity, and adding noise to protect privacy. Further details can be found in the Appendix.

PATE. Private Aggregation of Teacher Ensembles (PATE), proposed by Papernot et al. (2016), functions by first training multiple teacher models on disjoint datasets, then deploying the teacher models on unseen data to make predictions. On unseen data, the teacher models "vote" to determine the label; here random noise, drawn from the Laplace distribution Lap(λ), is introduced to privatize the results of the vote. PATE further introduces student models, which train using only the privatized labels garnered from the teachers' vote. By training multiple teachers on disjoint datasets and adding noise to their votes, PATE bounds the influence any single individual can have on the released labels.
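As a concrete illustration of Definition 1, the classic Laplace mechanism answers a counting query by adding noise drawn from Lap(sensitivity/ε), which satisfies (ε, 0)-differential privacy. The sketch below is purely illustrative; the toy dataset and function names are our own, not drawn from any referenced implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value perturbed with Lap(sensitivity / epsilon) noise.

    For a query whose output changes by at most `sensitivity` when one
    individual is added or removed, this satisfies (epsilon, 0)-DP.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
ages = np.array([34, 29, 41, 56, 62, 38])

# Counting query: how many individuals are over 40? Adding or removing
# one person changes this count by at most 1, so sensitivity = 1.
true_count = int((ages > 40).sum())  # 3
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller ε yields a larger noise scale, making the released count less accurate but harder to distinguish from the count on any neighboring dataset.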

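The DP-SGD update can be sketched in a few lines: clip each per-example gradient to a maximum l2 norm C (bounding sensitivity), sum, add Gaussian noise calibrated to C, and average before stepping. This is a minimal NumPy sketch for intuition only (the function and parameter names are ours, not the authors'); a real implementation would also track cumulative privacy loss with a moments accountant.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update on `params`.

    Each example's gradient is clipped to l2 norm `clip_norm`, then Gaussian
    noise with std `noise_multiplier * clip_norm` is added to the summed
    gradients before averaging and taking a plain SGD step.
    """
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```

The `noise_multiplier`, batch size, and number of steps together determine the overall (ε, δ) guarantee via the accountant.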

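PATE's noisy aggregation can likewise be sketched: count the teachers' votes per class, perturb each count with Lap(λ) noise, and release only the argmax to the student. A minimal sketch under our own naming assumptions, not the reference implementation:

```python
import numpy as np

def pate_noisy_vote(teacher_preds, num_classes, lam, rng):
    """Aggregate one example's teacher predictions via a Laplace-noised vote.

    Only the noisy argmax is released, so the student model never observes
    the raw teacher votes.
    """
    counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    counts += rng.laplace(loc=0.0, scale=lam, size=num_classes)
    return int(np.argmax(counts))

rng = np.random.default_rng(42)
teacher_preds = np.array([1, 1, 1, 1, 0, 2, 1, 1, 1, 1])  # votes from 10 teachers
label = pate_noisy_vote(teacher_preds, num_classes=3, lam=1.0, rng=rng)
```

When the teachers strongly agree, the noise rarely changes the winning class, so utility is preserved; a larger λ adds more noise and strengthens the per-query privacy guarantee.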