WEAK AND STRONG GRADIENT DIRECTIONS: EXPLAINING MEMORIZATION, GENERALIZATION, AND HARDNESS OF EXAMPLES AT SCALE

Abstract

The Coherent Gradients Hypothesis (CGH) was recently proposed to explain why over-parameterized neural networks trained with gradient descent generalize well even though they have sufficient capacity to memorize their training set. The key insight of CGH is that, since the overall gradient for a single step of SGD is the sum of the per-example gradients, it is strongest in directions that reduce the loss on multiple examples, if such directions exist. In this paper, we validate CGH on ResNet, Inception, and VGG models on ImageNet. Since the techniques presented in the original paper do not scale beyond toy models and datasets, we propose new methods. By posing the problem of suppressing weak gradient directions as a problem of robust mean estimation, we develop a coordinate-wise median-of-means approach. We present two versions of this algorithm: M3, which partitions a mini-batch into 3 groups and takes the coordinate-wise median of the group means, and a more efficient version, RM3, which reuses the gradients from the previous two time steps to compute the median. Since these methods suppress weak gradient directions without requiring per-example gradients, they can be used to train models at scale. Experimentally, we find that they indeed greatly reduce overfitting (and memorization), thus providing the first convincing evidence that CGH holds at scale. We also propose a new test of CGH that depends neither on adding noise to training labels nor on suppressing weak gradient directions. Using the intuition behind CGH, we posit that the examples learned early in training (i.e., the "easy" examples) are precisely those that have more in common with other training examples. Therefore, as per CGH, the easy examples should generalize better amongst themselves than the hard examples do amongst themselves. We validate this hypothesis with detailed experiments and believe it provides further, orthogonal evidence for CGH.
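For concreteness, the following is a minimal sketch of the two updates in NumPy-style Python. It assumes a hypothetical helper grad_fn(params, batch) that returns the mean gradient over a batch as a flat vector; this helper and the variable names are illustrative, not part of our released implementation.

```python
# Minimal sketch of the M3 and RM3 updates, assuming grad_fn(params, batch)
# returns the average gradient over `batch` as a flat vector of shape (d,).
import numpy as np

def m3_step(params, grad_fn, batch, lr):
    """One M3 step: split the mini-batch into 3 groups, average the
    gradients within each group, and take the coordinate-wise median
    of the 3 group means as the update direction."""
    groups = np.array_split(batch, 3)
    group_means = np.stack([grad_fn(params, g) for g in groups])  # (3, d)
    robust_grad = np.median(group_means, axis=0)  # coordinate-wise median
    return params - lr * robust_grad

def rm3_step(params, grad_fn, batch, lr, prev_grads):
    """One RM3 step: compute a single mini-batch gradient, then take the
    coordinate-wise median with the gradients from the previous two steps."""
    g_t = grad_fn(params, batch)
    g_prev1, g_prev2 = prev_grads
    robust_grad = np.median(np.stack([g_t, g_prev1, g_prev2]), axis=0)
    prev_grads = (g_t, g_prev1)  # slide the window of recent gradients
    return params - lr * robust_grad, prev_grads
```

With 3 groups, the coordinate-wise median is simply the middle of the three group means, so a direction supported by only one group cannot dominate the update; RM3 achieves the same effect at roughly the cost of one gradient computation per step by reusing the gradients from the previous two steps.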

1. INTRODUCTION

Generalization in over-parameterized neural networks trained using Stochastic Gradient Descent (SGD) is not well understood. Such networks typically have sufficient capacity to memorize their training set (Zhang et al., 2017), which naturally leads to the question: Among all the maps that are consistent with the training set, why does SGD learn one that generalizes well to the test set? This question has spawned a lot of research in the past few years (Arora et al., 2018; Arpit et al., 2017; Bartlett et al., 2017; Belkin et al., 2019; Fort et al., 2020; Kawaguchi et al., 2017; Neyshabur et al., 2018; Sankararaman et al., 2019; Rahaman et al., 2019; Zhang et al., 2017). There have been many attempts to extend classical algorithm-independent techniques for reasoning about generalization (e.g., VC-dimension) to incorporate the "implicit bias" of SGD and thereby obtain tighter bounds (by limiting the size of the hypothesis space to that reachable through SGD). Although this line of work is too large to review here, the recent paper of Nagarajan & Kolter (2019) provides a nice overview. However, they also point out some fundamental problems with this approach (particularly, poor asymptotics) and conclude that the underlying proof technique itself (uniform convergence) may be inadequate. They argue instead for looking at algorithmic stability (Bousquet & Elisseeff, 2002). While there has been work on analyzing the algorithmic stability of SGD (Hardt et al., 2016; Kuzborskij & Lampert, 2018), it does not take the training data into account. Since SGD can memorize training data with random labels, and yet generalize on real data (i.e., its generalization behavior is data-dependent (Arpit et al., 2017)), a data-independent stability analysis cannot fully explain why it generalizes.

