DISENTANGLING THE MECHANISMS BEHIND IMPLICIT REGULARIZATION IN SGD

Abstract

A number of competing hypotheses have been proposed to explain why small-batch Stochastic Gradient Descent (SGD) leads to improved generalization over the full-batch regime, with recent work crediting the implicit regularization of various quantities throughout training. However, to date, empirical evidence assessing the explanatory power of these hypotheses has been lacking. In this paper, we conduct an extensive empirical evaluation, focusing on the ability of various theorized mechanisms to close the small-to-large batch generalization gap. Additionally, we characterize how the quantities that SGD has been claimed to (implicitly) regularize change over the course of training. Using micro-batches, i.e., disjoint smaller subsets of each mini-batch, we empirically show that explicitly penalizing the gradient norm or the Fisher Information Matrix trace, averaged over micro-batches, in the large-batch regime recovers small-batch SGD generalization, whereas Jacobian-based regularizations fail to do so. This generalization performance is often correlated with how well the regularized model's gradient norms resemble those of small-batch SGD. We additionally show that this behavior breaks down as the micro-batch size approaches the batch size. Finally, we note that in this line of inquiry, positive experimental findings on CIFAR10 are often reversed on other datasets like CIFAR100, highlighting the need to test hypotheses on a wider collection of datasets.

1. INTRODUCTION

While small-batch SGD has frequently been observed to outperform large-batch SGD (Geiping et al., 2022; Keskar et al., 2017; Masters and Luschi, 2018; Smith et al., 2021; Wu et al., 2020; Jastrzebski et al., 2018; Wu et al., 2018; Wen et al., 2020; Mori and Ueda, 2020), the upstream cause of this generalization gap remains contested and has been approached from a variety of analytical perspectives (Goyal et al., 2017; Wu et al., 2020; Geiping et al., 2022; Lee et al., 2022). Initial work in this field generally focused on the learning-rate to batch-size ratio (Keskar et al., 2017; Masters and Luschi, 2018; Goyal et al., 2017; Mandt et al., 2017; He et al., 2019; Li et al., 2019) or on recreating stochastic noise via mini-batching (Wu et al., 2020; Jastrzebski et al., 2018; Zhu et al., 2019; Mori and Ueda, 2020; Cheng et al., 2020; Simsekli et al., 2019; Xie et al., 2021), whereas recent works have pivoted to understanding how mini-batch SGD may implicitly regularize certain quantities that improve generalization (Geiping et al., 2022; Barrett and Dherin, 2020; Smith et al., 2021; Lee et al., 2022; Jastrzebski et al., 2020). In this paper, we provide a careful empirical analysis of how these competing regularization theories compare to each other, as assessed by how well the prescribed interventions, when applied in the large-batch setting, recover SGD's performance. Additionally, we study their similarities and differences by analyzing the evolution of the regularized quantities over the course of training. Our main contributions are the following:

1. By utilizing micro-batches (i.e., disjoint subsets of each mini-batch), we find that explicitly regularizing either the average micro-batch gradient norm (Geiping et al., 2022; Barrett and Dherin, 2020) or the Fisher Information Matrix trace (Jastrzebski et al., 2020) (equivalent to the average gradient norm when labels are drawn from the predictive distribution, detailed in Section 2.2) in the large-batch regime fully recovers small-batch SGD generalization performance, whereas Jacobian-based regularization (Lee et al., 2022) fails to do so (see Figure 1).

2. We show that generalization performance is strongly correlated with how well the trajectory of the average micro-batch gradient norm during training mimics that of small-batch SGD, although this condition is not necessary for recovering performance in some scenarios. The poor performance of Jacobian regularization, which enforces either uniform or fully random weighting on each class and example (see Section 2.3), suggests that the benefits of average micro-batch gradient norm or Fisher trace regularization may stem from the loss gradient's ability to adaptively weight outputs on a per-example and per-class basis.

3. We demonstrate that the generalization benefits of both successful methods no longer hold when the micro-batch size approaches the actual batch size. We further show that in this regime the average micro-batch gradient norm behavior of both previously successful methods differs significantly from that of small-batch SGD.

4. We highlight a high-level issue in modern empirical deep learning research: experimental results that hold on CIFAR10 do not necessarily carry over to other datasets. In particular, we focus on a technique called gradient grafting (Agarwal et al., 2020), which has been shown to improve generalization for adaptive gradient methods. By examining its behavior for standard SGD and GD, we show that gradient grafting recovers small-batch SGD's generalization performance on CIFAR10 but fails on CIFAR100, arguing that research in this line should prioritize experiments on a larger and more diverse range of benchmark datasets.
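To make the first contribution concrete, the average micro-batch gradient norm penalty can be sketched numerically. The following is a minimal illustration, not the paper's actual setup: it uses a linear least-squares model as a hypothetical stand-in for a neural network, and the function name and `micro_size` argument are our own.

```python
import numpy as np

def avg_microbatch_grad_norm(X, y, w, micro_size):
    """Average L2 norm of per-micro-batch loss gradients for a linear
    model with mean squared error (illustrative stand-in model)."""
    norms = []
    for start in range(0, X.shape[0], micro_size):
        Xm = X[start:start + micro_size]
        ym = y[start:start + micro_size]
        # gradient of (1/2m) * ||Xm @ w - ym||^2 with respect to w
        grad = Xm.T @ (Xm @ w - ym) / Xm.shape[0]
        norms.append(np.linalg.norm(grad))
    return float(np.mean(norms))

# The regularized large-batch objective would then take the form
#   L(w) + lam * avg_microbatch_grad_norm(X, y, w, micro_size),
# with the penalty differentiated through by automatic
# differentiation (a double-backward pass) in practice.
```

In a deep learning framework, the same quantity would be computed with autodiff over the network's parameters rather than the closed-form gradient used here.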

2. PRIOR WORK AND PRELIMINARIES

In neural network training, the choice of batch size (and learning rate) heavily influences generalization. In particular, researchers have found that opting for small batch sizes (and large learning rates) improves a network's ability to generalize (Keskar et al., 2017; Masters and Luschi, 2018; Goyal et al., 2017; Mandt et al., 2017; He et al., 2019; Li et al., 2019). Yet, explanations for this phenomenon have long been debated. While some researchers have attributed the success of small-batch SGD to gradient noise introduced by stochasticity and mini-batching (Wu et al., 2020; Jastrzebski et al., 2018; Zhu et al., 2019; Mori and Ueda, 2020; Cheng et al., 2020; Simsekli et al., 2019; Xie et al., 2021), others posit that small-batch SGD finds "flat minima" with low non-uniformity, which in turn boosts generalization (Keskar et al., 2017; Wu et al., 2018; Simsekli et al., 2019). Mean-



Figure 1: Validation Accuracy and Average Micro-batch (|M| = 128) Gradient Norm for CIFAR10/100 Regularization Experiments, averaged across runs (plots also smoothed for clarity). In both datasets, Gradient Norm (GN) and Fisher Trace (FT) Regularization mimic the average micro-batch gradient norm behavior of SGD during early training and effectively recover generalization performance (within a small margin of error), whereas both Average and Unit Jacobian (AJ and UJ) fail to do so.
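The Fisher trace penalized by the FT method above can be estimated by sampling labels from the model's own predictive distribution, which is what makes it equivalent in expectation to an average gradient norm penalty. A minimal sketch for a softmax-linear classifier follows; the model, function names, and sampling scheme are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fisher_trace_estimate(X, W, n_samples=25, seed=0):
    """Monte Carlo estimate of tr(F) for a softmax-linear classifier:
    tr(F) = E_x E_{y ~ p_W(.|x)} ||grad_W log p_W(y|x)||^2,
    where the label y is drawn from the model's own predictions."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for x in X:
        p = softmax(W @ x)                 # predictive distribution over classes
        for _ in range(n_samples):
            y = rng.choice(len(p), p=p)    # label sampled from the model
            # grad_W log p_W(y|x) = (e_y - p) x^T for softmax regression
            g = np.outer(np.eye(len(p))[y] - p, x)
            total += float((g ** 2).sum())
    return total / (len(X) * n_samples)
```

Replacing the sampled label with the true label in the same expression recovers the squared per-example gradient norm of the cross-entropy loss, which is the sense in which the two regularizers coincide when the model's predictions match the label distribution.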

