WHY (AND WHEN) DOES LOCAL SGD GENERALIZE BETTER THAN SGD?

Abstract

Local SGD is a communication-efficient variant of SGD for large-scale training, where multiple GPUs perform SGD independently and average the model parameters periodically. It has recently been observed that Local SGD can not only achieve the design goal of reducing communication overhead but also lead to higher test accuracy than the corresponding SGD baseline (Lin et al., 2020b), though the training regimes in which this happens are still under debate (Ortiz et al., 2021). This paper aims to understand why (and when) Local SGD generalizes better, based on a Stochastic Differential Equation (SDE) approximation. The main contributions of this paper include (i) the derivation of an SDE that captures the long-term behavior of Local SGD in the small learning rate regime, showing how noise drives the iterate to drift and diffuse after it gets close to the manifold of local minima, (ii) a comparison between the SDEs of Local SGD and SGD, showing that Local SGD induces a stronger drift term that can result in a stronger regularization effect, e.g., a faster reduction of sharpness, and (iii) empirical evidence validating that a small learning rate and sufficiently long training time enable the generalization improvement over SGD, but that removing either of the two conditions eliminates the improvement.

1. INTRODUCTION

As deep models have grown larger, training them within a reasonable wall-clock time has necessitated distributed environments and new variants of gradient-based training. Recall that Stochastic Gradient Descent (SGD) tries to solve $\min_{\theta \in \mathbb{R}^d} \mathbb{E}_{\xi \sim \mathcal{D}}[\ell(\theta; \xi)]$, where $\theta \in \mathbb{R}^d$ is the parameter vector of the model and $\ell(\theta; \xi)$ is the loss function for a data sample $\xi$ drawn from the training distribution $\mathcal{D}$, e.g., the uniform distribution over the training set. SGD with learning rate $\eta$ and batch size $B$ performs the following update at each step, using a batch of $B$ independent samples $\xi_{t,1}, \ldots, \xi_{t,B} \sim \mathcal{D}$:
$$\theta_{t+1} \leftarrow \theta_t - \eta g_t, \qquad \text{where } g_t = \frac{1}{B} \sum_{i=1}^{B} \nabla \ell(\theta_t; \xi_{t,i}). \tag{1}$$
Parallel SGD improves wall-clock time when the batch size $B$ is large enough: it distributes the gradient computation across $K \ge 2$ workers, each of which computes the average gradient over a local batch of $B_{\mathrm{loc}} := B/K$ samples. Finally, $g_t$ is obtained by averaging the local gradients over the $K$ workers.
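To make the setup concrete, below is a minimal sketch (not the authors' code) of one SGD step as in Eq. (1) and of the corresponding parallel SGD step with $K$ workers. The helpers `sample_batch` and `grad_loss` are hypothetical placeholders for drawing samples from $\mathcal{D}$ and computing the per-sample gradient $\nabla\ell(\theta;\xi)$; with these, the parallel step is mathematically equivalent to a single SGD step with batch size $B$.

```python
import numpy as np

def sgd_step(theta, eta, B, sample_batch, grad_loss):
    """One SGD step (Eq. 1): theta <- theta - eta * (1/B) * sum_i grad ell(theta; xi_i).

    `sample_batch(n)` and `grad_loss(theta, xi)` are assumed, user-provided helpers.
    """
    batch = sample_batch(B)
    g = np.mean([grad_loss(theta, xi) for xi in batch], axis=0)
    return theta - eta * g

def parallel_sgd_step(theta, eta, B, K, sample_batch, grad_loss):
    """One parallel SGD step: K workers each average gradients over a local batch
    of B_loc = B/K samples; the K local gradients are then averaged into g_t.
    """
    B_loc = B // K
    local_grads = []
    for _ in range(K):
        local_batch = sample_batch(B_loc)
        local_grads.append(np.mean([grad_loss(theta, xi) for xi in local_batch], axis=0))
    g = np.mean(local_grads, axis=0)
    return theta - eta * g
```

In this sketch the only difference from plain SGD is where the averaging happens; Local SGD, discussed next, instead lets each worker take several local SGD steps before averaging the parameters.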

