PRIVATE FEDERATED LEARNING WITHOUT A TRUSTED SERVER: OPTIMAL ALGORITHMS FOR CONVEX LOSSES

Abstract

This paper studies federated learning (FL), especially cross-silo FL, with data from people who do not trust the server or other silos. In this setting, each silo (e.g., hospital) has data from different people (e.g., patients) and must maintain the privacy of each person's data (e.g., medical record), even if the server or other silos act as adversarial eavesdroppers. This requirement motivates the study of Inter-Silo Record-Level Differential Privacy (ISRL-DP), which requires silo i's communications to satisfy record/item-level differential privacy (DP). ISRL-DP ensures that the data of each person (e.g., patient) in silo i (e.g., hospital i) cannot be leaked. ISRL-DP is different from well-studied privacy notions. Central and user-level DP assume that people trust the server/other silos. On the other end of the spectrum, local DP assumes that people do not trust anyone at all (even their own silo). Sitting between central and local DP, ISRL-DP makes the realistic assumption (in cross-silo FL) that people trust their own silo, but not the server or other silos. In this work, we provide tight (up to logarithms) upper and lower bounds for ISRL-DP FL with convex/strongly convex loss functions and homogeneous (i.i.d.) silo data. Remarkably, we show that similar bounds are attainable for smooth losses with arbitrary heterogeneous silo data distributions, via an accelerated ISRL-DP algorithm. We also provide tight upper and lower bounds for ISRL-DP federated empirical risk minimization, and use acceleration to attain the optimal bounds in fewer rounds of communication than the state-of-the-art. Finally, with a secure "shuffler" to anonymize silo messages (but without a trusted server), our algorithm attains the optimal central DP rates under more practical trust assumptions. Numerical experiments show favorable privacy-accuracy tradeoffs for our algorithm in classification and regression tasks.
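As a concrete illustration of the record-level guarantee ISRL-DP asks of each silo, consider one round in which a silo releases a gradient update to the untrusted server. A standard way to make such a release record-level DP is to clip each record's gradient (bounding per-record sensitivity) and add Gaussian noise calibrated to the clipping norm. The sketch below is illustrative only; the function name and parameters are our own and do not correspond to the paper's algorithm.

```python
import numpy as np

def isrl_dp_update(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One silo's record-level DP gradient release (Gaussian-mechanism sketch).

    Each record's gradient is clipped so that no single person's data can
    change the sum by more than clip_norm; Gaussian noise proportional to
    that sensitivity is then added before the silo communicates the
    averaged update to the untrusted server.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is proportional to the per-record sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: 4 patient records in one hospital silo, 3-dimensional gradients.
grads = [np.ones(3) * k for k in range(1, 5)]
update = isrl_dp_update(grads, clip_norm=1.0, noise_multiplier=0.5, rng=0)
```

Because the noise depends only on the clipping norm (not on the server or other silos behaving honestly), the guarantee holds against adversarial eavesdroppers, which is exactly the trust model ISRL-DP targets.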

1. INTRODUCTION

Machine learning tasks often involve data from different "silos" (e.g., cell-phone users or organizations such as hospitals) containing sensitive information (e.g., location or health records). In federated learning (FL), each silo (a.k.a. "client") stores its data locally and a central server coordinates updates among the silos to achieve a global learning objective (Kairouz et al., 2019). One of the primary motivations for FL was to offer greater privacy (McMahan et al., 2017). However, storing data locally is not sufficient to prevent data leakage: model parameters or updates can still reveal sensitive information, e.g., via model inversion or membership inference attacks (Fredrikson et al., 2015; He et al., 2019; Song et al., 2020; Zhu & Han, 2020). Differential privacy (DP) (Dwork et al., 2006) protects against such privacy attacks. Different notions of DP have been proposed for FL. The works of Jayaraman & Wang (2018); Truex et al. (2019); Noble et al. (2022) considered central DP (CDP) FL, which protects the privacy of silos' aggregated data against an external adversary who observes the final trained model.¹ There are two major issues with

¹We abbreviate central differential privacy as CDP. This differs from the concentrated differential privacy notion of Bun & Steinke (2016), for which the same abbreviation is sometimes used in other works.

