PRIVATE FEDERATED LEARNING WITHOUT A TRUSTED SERVER: OPTIMAL ALGORITHMS FOR CONVEX LOSSES

Abstract

This paper studies federated learning (FL), particularly cross-silo FL, with data from people who do not trust the server or other silos. In this setting, each silo (e.g. hospital) holds data from different people (e.g. patients) and must maintain the privacy of each person's data (e.g. medical record), even if the server or other silos act as adversarial eavesdroppers. This requirement motivates the study of Inter-Silo Record-Level Differential Privacy (ISRL-DP), which requires that silo i's communications satisfy record/item-level differential privacy (DP). ISRL-DP ensures that the data of each person (e.g. patient) in silo i (e.g. hospital i) cannot be leaked. ISRL-DP is different from well-studied privacy notions: central and user-level DP assume that people trust the server/other silos, while, at the other end of the spectrum, local DP assumes that people do not trust anyone at all (even their own silo). Sitting between central and local DP, ISRL-DP makes the realistic assumption (in cross-silo FL) that people trust their own silo, but not the server or other silos. In this work, we provide tight (up to logarithms) upper and lower bounds for ISRL-DP FL with convex/strongly convex loss functions and homogeneous (i.i.d.) silo data. Remarkably, we show that similar bounds are attainable for smooth losses with arbitrary heterogeneous silo data distributions, via an accelerated ISRL-DP algorithm. We also provide tight upper and lower bounds for ISRL-DP federated empirical risk minimization, and use acceleration to attain the optimal bounds in fewer rounds of communication than the state of the art. Finally, with a secure "shuffler" to anonymize silo messages (but without a trusted server), our algorithm attains the optimal central DP rates under more practical trust assumptions. Numerical experiments show favorable privacy-accuracy tradeoffs for our algorithm in classification and regression tasks.

1. INTRODUCTION

Published as a conference paper at ICLR 2023

Machine learning tasks often involve data from different "silos" (e.g. cell-phone users or organizations such as hospitals) containing sensitive information (e.g. location or health records). In federated learning (FL), each silo (a.k.a. "client") stores its data locally, and a central server coordinates updates among the silos to achieve a global learning objective (Kairouz et al., 2019). One of the primary reasons for the introduction of FL was to offer greater privacy (McMahan et al., 2017). However, storing data locally is not sufficient to prevent data leakage: model parameters or updates can still reveal sensitive information, e.g. via model inversion or membership inference attacks (Fredrikson et al., 2015; He et al., 2019; Song et al., 2020; Zhu & Han, 2020). Differential privacy (DP) (Dwork et al., 2006) protects against such privacy attacks.

Different notions of DP have been proposed for FL. The works of Jayaraman & Wang (2018); Truex et al. (2019); Noble et al. (2022) considered central DP (CDP) FL, which protects the privacy of silos' aggregated data against an external adversary who observes the final trained model.[1] There are two major issues with CDP FL: 1) it does not guarantee privacy for each specific silo; and 2) it does not guarantee data privacy when an adversarial eavesdropper has access to other silos or the server. To address the first issue, McMahan et al. (2018); Geyer et al. (2017); Jayaraman & Wang (2018); Gade & Vaidya (2018); Wei et al. (2020a); Zhou & Tang (2020); Levy et al. (2021); Ghazi et al. (2021) considered user-level DP (a.k.a. client-level DP). User-level DP guarantees privacy of each silo's full local data set. This is a practical notion for cross-device FL, where each silo/client corresponds to a single person (e.g. cell-phone user) with many records (e.g. text messages). However, user-level DP still suffers from the second critical shortcoming of CDP: it allows silo data to be leaked to an untrusted server or to other silos. Furthermore, user-level DP is less suitable for cross-silo FL, where silos are typically organizations (e.g. hospitals, banks, or schools) that contain data from many different people (e.g. patients, customers, or students). In cross-silo FL, each person has a record (a.k.a. "item") that may contain sensitive data. Thus, an appropriate notion of DP for cross-silo FL should protect the privacy of each individual record ("item-level DP") in silo i, rather than silo i's full aggregate data.

At the other end of the privacy spectrum is local DP (LDP) (Kasiviswanathan et al., 2011; Duchi et al., 2013). While central and user-level DP assume that people trust all of the silos and the server, LDP assumes that individuals (e.g. patients) do not trust anyone else with their sensitive data, not even their own silo (e.g. hospital). Thus, LDP would require each person (e.g. patient) to randomize her report (e.g. medical test results) before releasing it (e.g. to her own doctor/hospital). Since patients/customers/students usually trust their own hospital/bank/school, LDP may be unnecessarily stringent, hindering performance/accuracy.

In this work, we consider a privacy notion called inter-silo record-level differential privacy (ISRL-DP), which requires that all of the communications of each silo satisfy (item-level) DP; see Fig. 1.[2]

Why ISRL-DP? ISRL-DP is the natural notion of DP for cross-silo FL, where each silo contains data from many individuals who trust their own silo but may not trust the server or other silos (e.g. hospitals in Fig. 1). The item-level privacy guarantee that ISRL-DP provides for each silo (e.g. hospital) ensures that no person's record can be leaked. In contrast to central DP and user-level DP, the protection of ISRL-DP is guaranteed even against an adversary with access to the server and/or other silos: each silo's communications are DP with respect to its own data records and therefore cannot leak information to any adversarial eavesdropper. On the other hand, since individuals (e.g. patients) trust their own silo (e.g. hospital), ISRL-DP does not require individuals to randomize their own data reports (e.g. health records). Thus, ISRL-DP achieves better performance/accuracy than local DP by relaxing the strict LDP requirement. Another benefit of ISRL-DP is that each silo i can set its own (ε_i, δ_i) item-level DP budget depending on its privacy needs; see Appendix H and also Liu et al. (2022); Aldaghri et al. (2021). In addition, ISRL-DP can be useful in cross-device FL without a trusted server: if the ISRL privacy parameters are chosen sufficiently small, then ISRL-DP implies user-level DP (see Appendix C). Unlike user-level DP, ISRL-DP does not allow data to be leaked to the untrusted server/other users.

Figure 1: ISRL-DP protects the privacy of each patient's record regardless of whether the server/other silos are trustworthy, as long as the patient's own hospital is trusted. By contrast, user-level DP protects the aggregate data of patients in hospital i and does not protect against an adversarial server/other silos.

Another intermediate trust model between the low-trust local model and the high-trust central/user-level models is the shuffle model of DP (Bittau et al., 2017; Cheu et al., 2019). In this model, a secure shuffler receives noisy reports from the silos and randomly permutes them before the reports are sent to the untrusted server.[3] An algorithm is shuffle differentially private (SDP) if the silos' shuffled messages satisfy central DP.

[1] We abbreviate central differential privacy by CDP. This is different from the concentrated differential privacy notion in Bun & Steinke (2016), for which the same abbreviation is sometimes used in other works.
[2] By the post-processing property of DP, this also ensures that the broadcasts by the server and the global model are DP. Privacy notions similar or identical to ISRL-DP have been considered in Truex et al. (2020); Huang et al. (2020); Huang & Gong (2020); Wu et al. (2019); Wei et al. (2020b); Dobbe et al. (2020); Zhao et al. (2020); Arachchige et al. (2019); Seif et al. (2020); Liu et al. (2022). We provide a rigorous definition of ISRL-DP in Definition 2 and Appendix B.
[3] We assume that the reports can be decrypted by the server, but not by the shuffler (Erlingsson et al., 2020a; Feldman et al., 2020b).
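To make the ISRL-DP requirement concrete, the sketch below shows the kind of local randomization a silo could apply before communicating: each record's gradient is clipped and Gaussian noise is added inside the silo, so only a noisy, record-level-DP message ever leaves it. This is an illustrative sketch under our own assumptions (the function name, the squared loss, and the noise calibration are chosen for exposition); it is not the paper's optimal accelerated algorithm.

```python
import numpy as np

def isrl_dp_silo_update(theta, X, y, clip_norm, noise_mult, lr, rng):
    """One illustrative ISRL-DP local step for a single silo.

    Per-record gradients (squared loss 0.5*(x.theta - y)^2) are clipped
    to `clip_norm`, averaged, and perturbed with Gaussian noise scaled
    to the sensitivity clip_norm/n. Only the noisy update is transmitted,
    so an eavesdropping server sees an item-level DP message.
    """
    n = len(y)
    grads = []
    for xi, yi in zip(X, y):
        g = (xi @ theta - yi) * xi          # per-record gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)  # clip sensitivity
        grads.append(g)
    avg = np.mean(grads, axis=0)
    sigma = noise_mult * clip_norm / n       # noise std for the average
    noisy = avg + rng.normal(0.0, sigma, size=avg.shape)
    return theta - lr * noisy                # only `noisy` leaves the silo
```

Because the noise is added before anything is communicated, the privacy guarantee holds against the server and other silos, not only against observers of the final model.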


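The shuffle model discussed above hinges on one operation: a secure shuffler uniformly permutes the silos' noisy reports so the server cannot attribute any message to its originating silo. A minimal sketch of that permutation step (illustrative only; a real deployment would also handle encryption and decryption of the reports) might look like:

```python
import secrets

def shuffle_reports(reports):
    """Uniformly permute silo reports with a cryptographic RNG, so the
    server receives the multiset of messages without sender identities."""
    shuffled = list(reports)
    # Fisher-Yates shuffle driven by a cryptographically secure source.
    for i in range(len(shuffled) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled
```

The shuffler changes only the order, never the contents, of the reports; the privacy amplification of SDP comes from this anonymization combined with the noise each silo already adds.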