ON THE ORIGIN OF IMPLICIT REGULARIZATION IN STOCHASTIC GRADIENT DESCENT

Abstract

For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However, moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon, we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow on a modified loss if the learning rate is small but finite. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small, the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small.
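As a rough illustration of the final claim, one can add the regularizer to the loss explicitly and train on the result. The sketch below is ours, not from the paper: the toy squared-error loss, the data shapes, and the ε/4 coefficient (carried over from the full batch GD analysis discussed in Section 1) are all illustrative assumptions, and the paper defines the exact SGD form in its equation 1. The sketch computes the full batch loss plus the average squared minibatch gradient norm, scaled by the learning rate.

    import jax
    import jax.numpy as jnp

    def minibatch_loss(params, x, y):
        # Toy squared-error loss on one minibatch, a stand-in for a
        # network's minibatch loss.
        return jnp.mean((x @ params - y) ** 2)

    def sq_grad_norm(params, x, y):
        # Squared Euclidean norm of the minibatch gradient: the
        # quantity the implicit regularizer penalizes.
        g = jax.grad(minibatch_loss)(params, x, y)
        return jnp.sum(g ** 2)

    def explicitly_regularized_loss(params, xs, ys, lr):
        # Full batch loss plus an explicit copy of the implicit
        # regularizer: the average squared minibatch gradient norm,
        # scaled by lr/4 (an assumption mirroring the GD analysis).
        # For small batch sizes B, the expected squared minibatch
        # gradient norm grows roughly like 1/B, giving the lr/B
        # scaling noted in the abstract.
        batch_losses = jax.vmap(minibatch_loss, in_axes=(None, 0, 0))(params, xs, ys)
        batch_norms = jax.vmap(sq_grad_norm, in_axes=(None, 0, 0))(params, xs, ys)
        return jnp.mean(batch_losses) + (lr / 4.0) * jnp.mean(batch_norms)

    # Illustrative usage: m=8 minibatches of size B=4 in d=3 dimensions.
    key = jax.random.PRNGKey(0)
    xs = jax.random.normal(key, (8, 4, 3))
    ys = jnp.sum(xs, axis=-1)
    params = jnp.zeros(3)
    print(explicitly_regularized_loss(params, xs, ys, lr=0.1))

Differentiating this objective (e.g., with jax.grad) then trains against both the loss and the gradient-norm penalty, which is how one would test the abstract's claim that the explicit regularizer recovers the benefit of large learning rates when the learning rate itself is small.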

1. INTRODUCTION

In the limit of vanishing learning rates, stochastic gradient descent with minibatch gradients (SGD) follows the path of gradient flow on the full batch loss function (Yaida, 2019). However, in deep networks, SGD often achieves higher test accuracies when the learning rate is moderately large (LeCun et al., 2012; Keskar et al., 2017). This generalization benefit is not explained by convergence rate bounds (Ma et al., 2018; Zhang et al., 2019), because it arises even for large compute budgets, for which smaller learning rates often achieve lower training losses (Smith et al., 2020). Although many authors have studied this phenomenon (Jastrzębski et al., 2018; Smith & Le, 2018; Chaudhari & Soatto, 2018; Shallue et al., 2018; Park et al., 2019; Li et al., 2019; Lewkowycz et al., 2020), it remains poorly understood and is an important open question in the theory of deep learning.

In a recent work, Barrett & Dherin (2021) analyzed the influence of finite learning rates on the iterates of gradient descent (GD). Their approach is inspired by backward error analysis, a method for the numerical analysis of ordinary differential equation (ODE) solvers (Hairer et al., 2006). The key insight of backward error analysis is that we can describe the bias introduced when integrating an ODE with finite step sizes by introducing an ancillary modified flow, which is derived to ensure that the discrete iterates of the original ODE lie on the path of the continuous solution to the modified flow. Using this technique, the authors show that if the learning rate ε is not too large, the discrete iterates of GD lie close to the path of gradient flow on a modified loss C_GD(ω) = C(ω) + (ε/4)||∇C(ω)||². This modified loss is composed of the original loss C(ω) and an implicit regularizer, proportional to the learning rate, which penalizes the Euclidean norm of the gradient.

However, these results only hold for full batch GD, while in practice SGD with small or moderately large batch sizes usually achieves higher test accuracies (Keskar et al., 2017; Smith et al., 2020). In this work, we devise an alternative approach to backward error analysis, which accounts for the correlations between minibatches during one epoch of training. Using this novel approach, we prove that for small finite learning rates, the mean SGD iterate after one epoch, averaged over all possible sequences of minibatches, lies close to the path of gradient flow on a second modified loss C_SGD(ω), which we define in equation 1. This new modified loss is also composed of the full batch loss function and an implicit regularizer, but the structures of the implicit regularizers for GD and SGD differ, and their modified losses can have different local and global minima. Our analysis
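To make the backward error analysis picture concrete, the following minimal numerical sketch (ours, not from the paper) compares one discrete GD step against Euler integrations of gradient flow on the original loss and on the modified loss C_GD; the toy loss, the substep count, and all variable names are illustrative assumptions. Per step, the GD iterate deviates from plain gradient flow at O(ε²) but from the modified flow at only O(ε³), so the second printed distance should come out roughly an order of magnitude smaller.

    import jax
    import jax.numpy as jnp

    jax.config.update("jax_enable_x64", True)  # higher precision for the comparison

    def loss(w):
        # Toy non-quadratic loss standing in for the full batch loss C(w),
        # chosen so that the modified flow differs from plain gradient flow.
        return jnp.sum(w ** 4) / 4.0 + jnp.sum(w ** 2) / 2.0

    def modified_loss(w, eps):
        # C(w) + (eps/4) * ||grad C(w)||^2, the modified loss of
        # Barrett & Dherin (2021).
        g = jax.grad(loss)(w)
        return loss(w) + (eps / 4.0) * jnp.sum(g ** 2)

    def euler_flow(w, grad_fn, total_time, n_sub=1000):
        # Crude Euler integration with tiny substeps, standing in for
        # the exact continuous-time gradient flow solution.
        h = total_time / n_sub
        for _ in range(n_sub):
            w = w - h * grad_fn(w)
        return w

    eps = 0.01                                   # learning rate = flow time of one step
    w0 = jnp.array([1.0, -0.5])
    w_gd = w0 - eps * jax.grad(loss)(w0)         # one discrete GD step
    w_plain = euler_flow(w0, jax.grad(loss), eps)
    w_mod = euler_flow(w0, jax.grad(lambda w: modified_loss(w, eps)), eps)
    print("distance to plain flow:   ", jnp.linalg.norm(w_gd - w_plain))  # O(eps^2)
    print("distance to modified flow:", jnp.linalg.norm(w_gd - w_mod))    # O(eps^3)

Shrinking ε further should widen the gap between the two distances, consistent with the O(ε²) versus O(ε³) error scalings that backward error analysis predicts.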

