EPISODE: EPISODIC GRADIENT CLIPPING WITH PERIODIC RESAMPLED CORRECTIONS FOR FEDERATED LEARNING WITH HETEROGENEOUS DATA

Abstract

Gradient clipping is an important technique for deep neural networks with exploding gradients, such as recurrent neural networks. Recent studies have shown that the loss functions of these networks do not satisfy the conventional smoothness condition, but instead satisfy a relaxed smoothness condition, i.e., the Lipschitz constant of the gradient scales linearly with the gradient norm. Motivated by this observation, several gradient clipping algorithms have been developed for nonconvex and relaxed-smooth functions. However, the existing algorithms apply only to the single-machine setting or the multiple-machine setting with homogeneous data across machines. It remains unclear how to design provably efficient gradient clipping algorithms in the general Federated Learning (FL) setting with heterogeneous data and limited communication rounds. In this paper, we design EPISODE, the very first algorithm to solve FL problems with heterogeneous data in the nonconvex and relaxed smoothness setting. The key ingredients of the algorithm are two new techniques called episodic gradient clipping and periodic resampled corrections. At the beginning of each round, EPISODE resamples stochastic gradients from each client and obtains the global averaged gradient, which is used to (1) determine whether to apply gradient clipping for the entire round and (2) construct local gradient corrections for each client. Notably, our algorithm and analysis provide a unified framework for both homogeneous and heterogeneous data under any noise level of the stochastic gradient, and they achieve state-of-the-art complexity results. In particular, we prove that EPISODE achieves linear speedup in the number of machines and requires significantly fewer communication rounds. Experiments on several heterogeneous datasets, including text classification and image classification, show the superior performance of EPISODE over several strong baselines in FL.
The code is available at https://github.com/MingruiLiu-ML-Lab/episode.

1. INTRODUCTION

Gradient clipping (Pascanu et al., 2012; 2013) is a well-known strategy for improving the training of deep neural networks that suffer from the exploding gradient issue, such as Recurrent Neural Networks (RNNs) (Rumelhart et al., 1986; Elman, 1990; Werbos, 1988) and Long Short-Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997). Although it is a widely used strategy, gradient clipping in deep neural networks was only recently analyzed formally under the framework of nonconvex optimization (Zhang et al., 2019a; 2020a; Cutkosky & Mehta, 2021; Liu et al., 2022). In particular, Zhang et al. (2019a) showed empirically that the gradient Lipschitz constant scales linearly with the gradient norm when training certain neural networks such as AWD-LSTM (Merity et al., 2018), introduced the relaxed smoothness condition (i.e., (L0, L1)-smoothness), and proved that clipped gradient descent converges faster than gradient descent with any fixed step size. Later on, Zhang et al. (2020a) provided tighter complexity bounds for the gradient clipping algorithm.

Federated Learning (FL) (McMahan et al., 2017a) is an important distributed learning paradigm in which a single model is trained collaboratively under the coordination of a central server without revealing client data. FL has two critical features: heterogeneous data and limited communication.

Table 1: Communication complexity (R) and largest number of skipped communications (I_max) guaranteeing linear speedup for different methods to find an ε-stationary point (defined in Definition 1). "Single" means a single machine; N is the number of clients; I is the number of skipped communications; κ is a quantity representing the heterogeneity; ∆ = f(x_0) − min_x f(x); and σ² is the variance of the stochastic gradients. Iteration complexity (T) is the product of communication complexity and the number of skipped communications (i.e., T = RI). Best iteration complexity T_min denotes the minimum value of T the algorithm can achieve by adjusting I. Linear speedup means the iteration complexity is divided by N compared with the single-machine baseline; in our case, it means T = O(∆L0σ²/(Nε⁴)) iteration complexity.

| Method | Setting | Communication complexity R | Best iteration complexity T_min | I_max |
|---|---|---|---|---|
| Local SGD (Yu et al., 2019) | Heterogeneous, L-smooth | O(∆Lσ²/(NIε⁴) + ∆Lκ²NI/(σ²ε²) + ∆LN/ε²) | O(∆Lσ²/(Nε⁴)) | O(σ²/(κNε)) |
| SCAFFOLD (Karimireddy et al., 2020) | Heterogeneous, L-smooth | O(∆Lσ²/(NIε⁴) + ∆L/ε²) | O(∆Lσ²/(Nε⁴)) | O(σ²/(Nε²)) |
| Clipped SGD (Zhang et al., 2019b) | Single, (L0, L1)-smooth | O((∆ + (L0 + L1σ)σ² + σL0²/L1)²/ε⁴) | O((∆ + (L0 + L1σ)σ² + σL0²/L1)²/ε⁴) | N/A |
| Clipping Framework (Zhang et al., 2020a) | Single, (L0, L1)-smooth | O(∆L0σ²/ε⁴) | O(∆L0σ²/ε⁴) | N/A |
| CELGC (Liu et al., 2022) | Homogeneous, (L0, L1)-smooth | O(∆L0σ²/(NIε⁴)) | O(∆L0σ²/(Nε⁴)) | O(σ/(Nε)) |
| EPISODE (this work) | Heterogeneous, (L0, L1)-smooth | O(∆L0σ²/(NIε⁴) + ∆(L0 + L1(κ + σ))(1 + σ/ε)/ε²) | O(∆L0σ²/(Nε⁴)) | O(L0σ²/((L0 + L1(κ + σ))(1 + σ/ε)Nε²)) |

Although there is a vast literature on FL (see (Kairouz et al., 2019) and the references therein), the theoretical and algorithmic understanding of gradient clipping algorithms for training deep neural networks in the FL setting remains nascent. To the best of our knowledge, Liu et al. (2022) is the only work that has considered a communication-efficient distributed gradient clipping algorithm under the nonconvex and relaxed smoothness conditions in the FL setting. In particular, Liu et al. (2022) proved that their algorithm achieves linear speedup in the number of clients with reduced communication rounds. Nevertheless, their algorithm and analysis apply only to the case of homogeneous data. In addition, the analyses of stochastic gradient clipping algorithms in both the single-machine (Zhang et al., 2020a) and multiple-machine (Liu et al., 2022) settings require strong distributional assumptions on the stochastic gradient noise*, which may not hold in practice.
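For concreteness, the standard clipping operation analyzed in the single-machine works above can be sketched as follows. This is a minimal illustration: the function name `clipped_sgd_step` and the particular step size and threshold values are hypothetical, not drawn from any cited analysis.

```python
import numpy as np

def clipped_sgd_step(x, grad, lr, gamma):
    """One clipped SGD step: rescale the gradient whenever its norm
    exceeds the threshold gamma, then take a gradient step."""
    g_norm = np.linalg.norm(grad)
    # Clipping factor min(1, gamma/||g||): a no-op for small gradients,
    # a rescaling to norm gamma for large ones.
    scale = min(1.0, gamma / g_norm) if g_norm > 0 else 1.0
    return x - lr * scale * grad
```

Under (L0, L1)-smoothness, this cap on the effective step length is what prevents the linearly growing gradient Lipschitz constant from destabilizing the update in regions where the gradient is large.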
In this work, we introduce a provably computation- and communication-efficient gradient clipping algorithm for nonconvex and relaxed-smooth functions in the general FL setting (i.e., heterogeneous data and limited communication), without any distributional assumptions on the stochastic gradient noise. Compared with previous work on gradient clipping (Zhang et al., 2019a; 2020a; Cutkosky & Mehta, 2020; Liu et al., 2022) and on FL with heterogeneous data (Li et al., 2020a; Karimireddy et al., 2020), our algorithm design relies on two novel techniques: episodic gradient clipping and periodic resampled corrections. In a nutshell, at the beginning of each communication round, the algorithm resamples each client's stochastic gradient; this information is used to decide whether to apply clipping in the current round (i.e., episodic gradient clipping) and to perform local corrections to each client's update (i.e., periodic resampled corrections). These techniques differ substantially from previous work on gradient clipping. Specifically: (1) in traditional gradient clipping (Pascanu et al., 2012; Zhang et al., 2019a; 2020a; Liu et al., 2022), whether or not to apply the clipping operation is determined only by the norm of the client's current stochastic gradient; instead, we use the norm of the global objective's stochastic gradient (resampled at the beginning of the round) to determine whether clipping will be applied throughout the entire communication round. (2) Unlike Karimireddy et al. (2020), which uses historical gradient information from the previous round to perform corrections, our algorithm utilizes the resampled gradient to correct each client's local update towards the global gradient, which mitigates the effect of data heterogeneity. Notice that, under the relaxed smoothness setting, the gradient may change quickly around a point at which the gradient norm is large.
Therefore, our algorithm treats a small global gradient as more "reliable" and confidently applies the unclipped corrected local updates; conversely, it regards a large gradient as less "reliable" and in that case clips the corrected local updates. Our major contributions are summarized as follows.
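The round structure described above, round-level clipping driven by the global resampled gradient plus per-client corrections, can be sketched in NumPy as follows. This is a schematic under our own simplifying assumptions (the function `episode_round`, its interface, and the exact form of the clipped local update are illustrative, not the paper's precise algorithm).

```python
import numpy as np

def episode_round(x, round_grads, client_stoch_grad, num_local_steps,
                  eta, gamma):
    """Schematic sketch of one EPISODE communication round.

    round_grads: list of resampled stochastic gradients G_i, one per
        client, evaluated at the round-start iterate x.
    client_stoch_grad: callable (i, x) -> fresh stochastic gradient of
        client i's local objective at x (an assumed interface).
    """
    num_clients = len(round_grads)
    G = np.mean(round_grads, axis=0)  # global averaged gradient
    # Episodic clipping: a single clip/no-clip decision for the entire
    # round, based on the norm of the global resampled gradient.
    clip_round = np.linalg.norm(G) > gamma

    local_iterates = []
    for i in range(num_clients):
        x_i = x.copy()
        for _ in range(num_local_steps):
            # Periodic resampled correction: steer the local stochastic
            # gradient toward the global direction G.
            g = client_stoch_grad(i, x_i) - round_grads[i] + G
            if clip_round:
                g = gamma * g / max(np.linalg.norm(g), 1e-12)
            x_i = x_i - eta * g
        local_iterates.append(x_i)
    # The server averages the local iterates to start the next round.
    return np.mean(local_iterates, axis=0)
```

On homogeneous data the correction term G − G_i is small and the sketch degenerates to clipped local SGD; on heterogeneous data it pulls every client's local trajectory toward the global descent direction.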



* Zhang et al. (2020a) requires an explicit lower bound on the stochastic gradient noise, and Liu et al. (2022) requires that the distribution of the stochastic gradient noise be unimodal and symmetric around its mean.

