RECYCLING SCRAPS: IMPROVING PRIVATE LEARNING BY LEVERAGING INTERMEDIATE CHECKPOINTS

Abstract

All state-of-the-art (SOTA) differentially private machine learning (DP ML) methods are iterative in nature, and their privacy analyses allow publicly releasing the intermediate training checkpoints. However, DP ML benchmarks, and even practical deployments, typically use only the final training checkpoint to make predictions. In this work, for the first time, we comprehensively explore various methods that aggregate intermediate checkpoints to improve the utility of DP training. Empirically, we demonstrate that checkpoint aggregations provide significant gains in prediction accuracy over the existing SOTA for the CIFAR10 and StackOverflow datasets, and that these gains are magnified in settings with periodically varying training data distributions. For instance, we improve SOTA StackOverflow accuracies to 22.7% (+0.43% absolute) for ε = 8.2, and 23.84% (+0.43%) for ε = 18.9. Theoretically, we show that uniform tail averaging of checkpoints improves the empirical risk minimization bound compared to the last checkpoint of DP-SGD. Lastly, we initiate an exploration into estimating the uncertainty that DP noise adds to the predictions of DP ML models. We prove that, under standard assumptions on the loss function, the sample variance of the last few checkpoints provides a good approximation of the variance of the final model of a DP run. Empirically, we show that the last few checkpoints can provide a reasonable lower bound on the variance of a converged DP model.
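The two checkpoint-based estimators named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the `tail_fraction` default, and the choice of `k` are assumptions made for the example, and checkpoints are represented as flat parameter vectors.

```python
import numpy as np

def uniform_tail_average(checkpoints, tail_fraction=0.5):
    """Uniformly average the parameters of the last `tail_fraction`
    of checkpoints (hypothetical signature; the fraction averaged is
    a tunable choice, not a value prescribed by the paper)."""
    k = max(1, int(len(checkpoints) * tail_fraction))
    # Stack the tail checkpoints and take the element-wise mean.
    return np.mean(np.stack(checkpoints[-k:]), axis=0)

def checkpoint_sample_variance(checkpoints, k=5):
    """Per-parameter sample variance over the last k checkpoints,
    used as a proxy for the variance that DP noise induces in the
    final model of a DP training run."""
    tail = np.stack(checkpoints[-k:])
    # ddof=1 gives the unbiased sample variance across checkpoints.
    return tail.var(axis=0, ddof=1)
```

For example, with checkpoints `[1, 2, 3, 4]` (as scalar "parameter vectors"), a tail fraction of 0.5 averages the last two checkpoints, and the sample variance over those two captures their spread.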

1. INTRODUCTION

Machine learning models can unintentionally memorize sensitive information about the data they were trained on, which has led to numerous attacks that extract private information about the training data (Ateniese et al., 2013; Fredrikson et al., 2014; 2015; Carlini et al., 2019; Shejwalkar et al., 2021; Carlini et al., 2021; 2022). For instance, membership inference attacks (Shokri et al., 2017) can infer whether a target sample was used to train a given ML model, while property inference attacks (Melis et al., 2019; Mahloujifar et al., 2022) can infer certain sensitive properties of the training data. To address such privacy risks, the literature has introduced various approaches to privacy-preserving ML (Nasr et al., 2018; Shejwalkar & Houmansadr, 2021; Tang et al., 2022). In particular, iterative techniques like differentially private stochastic gradient descent (DP-SGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016b; McMahan et al., 2017) and DP Follow The Regularized Leader (DP-FTRL) (Kairouz et al., 2021) have become the state-of-the-art for training DP neural networks. For establishing benchmarks, prior works in DP ML (Abadi et al., 2016b; McMahan et al., 2017; 2018; Thakkar et al., 2019; Erlingsson et al., 2019; Wang et al., 2019b; Zhu & Wang, 2019; Balle et al., 2020; Erlingsson et al., 2020; Papernot et al., 2020; Tramer & Boneh, 2020; Andrew et al., 2021; Kairouz et al., 2021; Amid et al., 2022; De et al., 2022; Feldman et al., 2022) use only the final model output by the DP algorithm. This is also how DP models are deployed in practice (Ramaswamy et al., 2020; McMahan et al., 2022). However, the privacy analyses for these techniques allow releasing and using all of the intermediate training checkpoints. In this work, we comprehensively study various methods that leverage intermediate checkpoints to 1) improve the utility of DP training, and 2) quantify the uncertainty in DP ML models that is due to the DP noise.

Accuracy improvement using checkpoints:

We propose two classes of aggregation methods: those that aggregate the parameters of checkpoints, and those that aggregate their outputs. We provide both theoretical and em-

