FEDERATED LEARNING'S BLESSING: FEDAVG HAS LINEAR SPEEDUP

Abstract

Federated learning (FL) learns a model jointly from a set of participating devices without sharing each other's privately held data. The characteristics of non-i.i.d. data across the network, low device participation, high communication costs, and the mandate that data remain private bring challenges in understanding the convergence of FL algorithms, particularly with regard to how convergence scales with the number of participating devices. In this paper, we focus on Federated Averaging (FedAvg)-arguably the most popular and effective FL algorithm class in use today-and provide a unified and comprehensive study of its convergence rate. Although FedAvg has recently been studied by an emerging line of literature, it remains open how FedAvg's convergence scales with the number of participating devices in the fully heterogeneous FL setting-a crucial question whose answer would shed light on the performance of FedAvg in large FL systems. We fill this gap by providing a unified analysis that establishes convergence guarantees for FedAvg under three classes of problems: strongly convex smooth, convex smooth, and overparameterized strongly convex smooth problems. We show that FedAvg enjoys linear speedup in each case, although with different convergence rates and communication efficiencies. While there have been linear speedup results from distributed optimization that assume full participation, ours are the first to establish linear speedup for FedAvg under both statistical and system heterogeneity. For strongly convex and convex problems, we also characterize the corresponding convergence rates for the Nesterov accelerated FedAvg algorithm, which are the first linear speedup guarantees for momentum variants of FedAvg in the convex setting. To provably accelerate FedAvg, we design a new momentum-based FL algorithm that further improves the convergence rate in overparameterized linear regression problems. Empirical studies of the algorithms in various settings support our theoretical results.

1. INTRODUCTION

Federated learning (FL) is a machine learning paradigm where many clients (e.g., mobile devices or organizations) collaboratively train a model under the orchestration of a central server (e.g., service provider), while keeping the training data decentralized (Smith et al. (2017); Kairouz et al. (2019)). In recent years, FL has swiftly emerged as an important learning paradigm (McMahan et al. (2017); Li et al. (2020a))-one that enjoys widespread success in applications such as personalized recommendation (Chen et al. (2018)), virtual assistant (Lam et al. (2019)), and keyboard prediction (Hard et al. (2018)), to name a few-for at least three reasons: First, the rapid proliferation of smart devices that are equipped with both computing power and data-capturing capabilities provided the infrastructure core for FL. Second, the rising awareness of privacy and the explosive growth of computational power in mobile devices have made it increasingly attractive to push the computation to the edge. Third, the empirical success of communication-efficient FL algorithms has enabled increasingly larger-scale parallel computing and learning with less communication overhead. Despite its promise and broad applicability in our current era, the potential value FL delivers is coupled with the unique challenges it brings forth. In particular, when FL learns a single statistical model using data from across all the devices while keeping each individual device's data isolated (Kairouz et al. (2019)), it faces two challenges that are absent in centralized optimization and distributed (stochastic) optimization (Zhou & Cong (2018); Stich (2019); Khaled et al. (2019); Liang et al. (2019); Wang & Joshi (2018); Woodworth et al. (2018); Wang et al. (2019); Jiang & Agrawal (2018); Yu et al. (2019b;a); Khaled et al. (2020); Koloskova et al. (2020); Woodworth et al. (2020b;a)): 1) Data (statistical) heterogeneity: data distributions in devices are different (and data cannot be shared);
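To fix ideas, the FedAvg scheme studied in this paper can be sketched on a toy problem. The following is a minimal illustration, not the paper's experimental setup: client data, dimensions, step sizes, and the least-squares objective are all assumptions chosen for brevity. It shows the two ingredients that drive the analysis-multiple local SGD steps on heterogeneous client data, and server-side averaging over a sampled subset of devices (partial participation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: N clients each hold their own linear-regression
# data with shifted feature distributions -- a stand-in for data heterogeneity.
N, d, n_local = 8, 5, 50
w_true = rng.normal(size=d)
clients = []
for i in range(N):
    X = rng.normal(loc=0.1 * i, size=(n_local, d))  # per-client feature shift
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    clients.append((X, y))

def local_sgd(w, X, y, steps, lr):
    """Run `steps` single-sample SGD updates on one client's squared loss."""
    for _ in range(steps):
        j = rng.integers(len(y))
        w = w - lr * (X[j] @ w - y[j]) * X[j]
    return w

def fedavg(rounds=200, E=5, S=4, lr=0.05):
    """FedAvg: E local steps per round, S of N clients sampled per round."""
    w = np.zeros(d)
    for _ in range(rounds):
        chosen = rng.choice(N, size=S, replace=False)   # partial participation
        locals_ = [local_sgd(w.copy(), *clients[i], E, lr) for i in chosen]
        w = np.mean(locals_, axis=0)                    # server averaging
    return w

w_hat = fedavg()
print(np.linalg.norm(w_hat - w_true))  # residual error after training
```

Averaging only the sampled clients' models each round is exactly the regime where device participation enters the rate; the linear-speedup results quantify how the error above shrinks as more clients (larger S) contribute per round.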

