WEAK AND STRONG GRADIENT DIRECTIONS: EXPLAINING MEMORIZATION, GENERALIZATION, AND HARDNESS OF EXAMPLES AT SCALE

Abstract

Coherent Gradients (CGH) is a recently proposed hypothesis to explain why over-parameterized neural networks trained with gradient descent generalize well even though they have sufficient capacity to memorize the training set. The key insight of CGH is that, since the overall gradient for a single step of SGD is the sum of the per-example gradients, it is strongest in directions that reduce the loss on multiple examples, if such directions exist. In this paper, we validate CGH on ResNet, Inception, and VGG models on ImageNet. Since the techniques presented in the original paper do not scale beyond toy models and datasets, we propose new methods. By posing the problem of suppressing weak gradient directions as one of robust mean estimation, we develop a coordinate-based median of means approach. We present two versions of this algorithm: M3, which partitions a mini-batch into 3 groups and computes the median, and a more efficient version, RM3, which reuses gradients from the two previous time steps to compute the median. Since they suppress weak gradient directions without requiring per-example gradients, they can be used to train models at scale. Experimentally, we find that they indeed greatly reduce overfitting (and memorization) and thus provide the first convincing evidence that CGH holds at scale. We also propose a new test of CGH that does not depend on adding noise to training labels or on suppressing weak gradient directions. Using the intuition behind CGH, we posit that the examples learned early in the training process (i.e., "easy" examples) are precisely those that have more in common with other training examples. Therefore, as per CGH, the easy examples should generalize better amongst themselves than the hard examples do amongst themselves. We validate this hypothesis with detailed experiments, and believe that it provides further, orthogonal evidence for CGH.

1. INTRODUCTION

Generalization in over-parameterized neural networks trained using Stochastic Gradient Descent (SGD) is not well understood. Such networks typically have sufficient capacity to memorize their training set (Zhang et al., 2017), which naturally leads to the question: among all the maps that are consistent with the training set, why does SGD learn one that generalizes well to the test set? This question has spawned a great deal of research in the past few years (Arora et al., 2018; Arpit et al., 2017; Bartlett et al., 2017; Belkin et al., 2019; Fort et al., 2020; Kawaguchi et al., 2017; Neyshabur et al., 2018; Sankararaman et al., 2019; Rahaman et al., 2019; Zhang et al., 2017). There have been many attempts to extend classical algorithm-independent techniques for reasoning about generalization (e.g., VC-dimension) to incorporate the "implicit bias" of SGD and thereby obtain tighter bounds (by limiting the size of the hypothesis space to that reachable through SGD). Although this line of work is too large to review here, the recent paper of Nagarajan & Kolter (2019) provides a nice overview. However, they also point out some fundamental problems with this approach (particularly, poor asymptotics), and come to the conclusion that the underlying proof technique itself (uniform convergence) may be inadequate. They argue instead for looking at algorithmic stability (Bousquet & Elisseeff, 2002). While there has been work on analyzing the algorithmic stability of SGD (Hardt et al., 2016; Kuzborskij & Lampert, 2018), it does not take the training data into account. Since SGD can memorize training data with random labels, and yet generalize on real data (i.e., its generalization behavior is data-dependent (Arpit et al., 2017)), any such analysis must lead to vacuous bounds in practical settings (Zhang et al., 2017).
Thus, in order for an algorithmic stability based argument to work, what is needed is an approach that takes into account both the algorithmic details of SGD and the training data. Recently, a new approach for understanding generalization along these lines was proposed in Chatterjee (2020). Called the Coherent Gradients Hypothesis (CGH), its key observation is that descent directions that are common to multiple examples (i.e., similar across examples) add up in the overall gradient (i.e., reinforce each other), whereas directions that are idiosyncratic to particular examples fail to add up. Thus, the biggest changes to the network parameters are those that benefit multiple examples. In other words, certain directions in the tangent space of the loss function are "strong" gradient directions supported by multiple examples, whereas other directions are "weak" directions supported by only a few examples. Intuitively (and CGH is only a qualitative theory at this point), strong directions are algorithmically stable in the sense of Bousquet & Elisseeff (2002), i.e., altered only marginally by the removal of a single example, whereas weak directions are algorithmically unstable (they could disappear entirely if the example supporting them is removed). Therefore, a change to the parameters along a strong direction should generalize better than one along a weak direction. Since the overall gradient is the mean of per-example gradients, if strong directions exist, the overall gradient has large components along them, and thus the parameter updates are biased towards algorithmic stability. Since CGH is a causal explanation for generalization, Chatterjee (2020) tested the theory by performing two causal interventions.
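The reinforcement of common directions under averaging can be illustrated with a toy computation (the numbers are hypothetical, chosen only to make the effect visible):

```python
import numpy as np

# Toy illustration of CGH's key observation. Each of 5 per-example gradients
# has a small shared component (coordinate 0, a "strong" direction supported
# by every example) and a much larger idiosyncratic component (a "weak"
# direction unique to that example, coordinates 1-5).
n = 5
per_example = np.zeros((n, n + 1))
per_example[:, 0] = 1.0                               # shared component
per_example[np.arange(n), np.arange(1, n + 1)] = 3.0  # idiosyncratic components

overall = per_example.mean(axis=0)  # the mini-batch gradient used by SGD

# The shared direction survives averaging at full strength (1.0), while each
# idiosyncratic component, though 3x larger per example, shrinks to 3.0/n = 0.6.
print(overall)
```

Even though each weak component dominates its own per-example gradient, averaging leaves the strong direction as the largest component of the overall gradient, which is precisely the bias towards stable directions that CGH describes.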
Although they found good agreement between the qualitative predictions of the theory and experiments, an important limitation of their work is that their experiments were on shallow (1-3 hidden layers) fully connected networks trained on MNIST using SGD with a fixed learning rate. In this work, we test CGH on large convolutional networks such as ResNet, Inception, and VGG on ImageNet. While one of the tests of Chatterjee (2020) (reducing similarity) scales to this setting, the more compelling test (suppressing weak gradients by winsorization) does not. We propose a new class of scalable techniques for suppressing weak gradients, and also propose an entirely new test of CGH which is based not on causal intervention but on analyzing why some examples are learned earlier in training than others.

2 PRELIMINARY: REDUCING SIMILARITY ON IMAGENET

One test of CGH proposed in Chatterjee (2020) is to study how dataset similarity impacts training. Directly studying similarity is difficult because which examples are considered similar may change during training (in CGH, examples are similar if their gradients are similar). Therefore, Chatterjee (2020) proposed adding label noise to a dataset, based on the intuition that, no matter what the notion of similarity, adding label noise is likely to decrease it. If CGH is true, we should then expect that:
• As the label noise increases, the rate at which examples are learned decreases,
• Examples whose labels have not been corrupted (pristine examples) should be learned faster than the rest (corrupt examples), and,
• With increasing noise, since there are fewer pristine examples, the rate at which they are learned should decrease.
As a preliminary experiment, we ran this test on ImageNet, and the results for ResNet-18 are shown in Figure 1. The results for Inception-V3 and VGG-13 are very similar (please see Appendix A). We note the good agreement with the predictions of CGH, providing initial evidence that CGH holds at scale.
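The label-noise intervention can be sketched as follows. This is a minimal reconstruction under my own assumptions (the function name and exact corruption recipe are illustrative, not the paper's code); it corrupts a fraction `eps` of labels by resampling each uniformly from the other classes, and records which examples remain pristine so their learning curves can be tracked separately:

```python
import numpy as np

def corrupt_labels(labels, num_classes, eps, seed=0):
    """Flip a random fraction `eps` of labels to a different class.

    Returns the noisy labels and a boolean mask marking corrupted examples
    (so pristine vs. corrupt learning rates can be measured separately).
    """
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    corrupt_mask = rng.random(len(labels)) < eps
    for i in np.flatnonzero(corrupt_mask):
        # choose a wrong label uniformly from the remaining classes
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy, corrupt_mask

labels = np.array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4])
noisy, mask = corrupt_labels(labels, num_classes=5, eps=0.5)
# corrupted examples are guaranteed a different label; the rest are untouched
assert np.all(noisy[mask] != labels[mask])
assert np.all(noisy[~mask] == labels[~mask])
```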

3 ABLATING SGD TO TEST THE COHERENT GRADIENTS HYPOTHESIS: SCALABLE TECHNIQUES TO SUPPRESS WEAK GRADIENT DIRECTIONS

Since weak directions are supported by only a few examples, CGH holds that overfitting and memorization in SGD are caused by descending along weak directions. The original CGH paper proposed to test this by modifying SGD to suppress weak directions and verifying that doing so significantly reduces overfitting (Chatterjee, 2020), i.e., improves generalization through greater algorithmic stability (Bousquet & Elisseeff, 2002).
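The M3 update direction can be sketched as a coordinate-wise median of group means; this is my reconstruction from the description above, not the paper's released code (RM3 would instead take the median over the current gradient and the gradients from the two previous steps):

```python
import numpy as np

def m3_gradient(per_example_grads):
    """Median-of-means gradient: split the mini-batch into 3 groups,
    average within each group, take the coordinate-wise median of the
    3 group means. A direction supported by only a few examples usually
    appears in at most one group mean, so the median suppresses it; a
    strong direction appears in all three groups and survives."""
    groups = np.array_split(per_example_grads, 3, axis=0)
    group_means = np.stack([g.mean(axis=0) for g in groups])
    return np.median(group_means, axis=0)

# 6 examples, 2 parameters: every example agrees on coordinate 0 (strong),
# while a single example pushes hard on coordinate 1 (weak).
g = np.array([[1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 9.]])
print(m3_gradient(g))  # weak coordinate suppressed to 0
print(g.mean(axis=0))  # plain SGD mean retains it (1.5 on coordinate 1)
```

Note that, unlike winsorization, this estimator needs only the 3 group-mean gradients rather than all per-example gradients, which is what makes it usable at scale.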

