UNMASKING THE LOTTERY TICKET HYPOTHESIS: WHAT'S ENCODED IN A WINNING TICKET'S MASK?

Abstract

As neural networks get larger and costlier, it is important to find sparse networks that require less compute and memory but can be trained to the same accuracy as the full network (i.e., matching). Iterative magnitude pruning (IMP) is a state-of-the-art algorithm that can find such highly sparse matching subnetworks, known as winning tickets. IMP iterates through cycles of training, pruning a fraction of the smallest-magnitude weights, and rewinding the unpruned weights back to an early point in training. Despite its simplicity, the principles underlying when and how IMP finds winning tickets remain elusive. In particular, what useful information does an IMP mask found at the end of training convey to a rewound network near the beginning of training? How does SGD allow the network to extract this information? And why is iterative pruning needed, i.e., why can't we prune to very high sparsities in one shot? We investigate these questions through the lens of the geometry of the error landscape. First, we find that, at higher sparsities, pairs of pruned networks at successive pruning iterations are connected by a linear path with zero error barrier if and only if they are matching. This indicates that masks found at the end of training convey to the rewind point the identity of an axial subspace that intersects a desired linearly connected mode of a matching sublevel set. Second, we show that SGD can exploit this information because of a strong form of robustness: it can return to this mode despite strong perturbations early in training. Third, we show how the flatness of the error landscape at the end of training limits the fraction of weights that can be pruned at each iteration of IMP. This analysis yields a new quantitative link between IMP performance and the Hessian eigenspectrum. Finally, we show that the role of retraining in IMP is to find a network with new small weights to prune. Overall, these results make progress toward demystifying the existence of winning tickets by revealing the fundamental role of error landscape geometry in the algorithms used to find them.
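To make the linear connectivity criterion above concrete, the following is a minimal sketch (not code from the paper) of how one might measure the error barrier on the linear path between two trained networks. Here `evaluate_error`, which returns a network's test error, is an assumed placeholder, and both models are assumed to share an architecture whose state consists of floating-point weight tensors.

```python
import copy
import torch

def linear_path_errors(model_a, model_b, evaluate_error, num_points=11):
    """Test error at evenly spaced points on the segment between two models."""
    interp = copy.deepcopy(model_a)
    sa, sb = model_a.state_dict(), model_b.state_dict()
    errors = []
    for alpha in torch.linspace(0.0, 1.0, num_points):
        # Weights of the interpolated network: (1 - alpha) * w_a + alpha * w_b.
        interp.load_state_dict(
            {k: (1 - alpha) * sa[k] + alpha * sb[k] for k in sa}
        )
        errors.append(evaluate_error(interp))
    return errors

def error_barrier(errors):
    """Height of the error peak on the path above the mean endpoint error.

    A barrier near zero indicates that the two networks lie in the same
    linearly connected mode of the error landscape.
    """
    return max(errors) - 0.5 * (errors[0] + errors[-1])
```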

1. INTRODUCTION

Recent advances in deep learning have been driven by massively scaling both the size of networks and datasets (Kaplan et al., 2020; Hoffmann et al., 2022). But this scale comes at considerable resource cost. For example, when training or deploying networks on edge devices, memory and computational demands must remain small. This motivates the search for sparse trainable networks (Blalock et al., 2020), pruned datasets (Paul et al., 2021; Sorscher et al., 2022), or both (Paul et al., 2022) that can be used within these resource constraints. However, finding highly sparse, trainable networks is challenging. A state-of-the-art algorithm for doing so is iterative magnitude pruning (IMP) (Frankle et al., 2020a). IMP starts with a dense network, usually pretrained for a very short time; the weights of this network are called the rewind point. IMP then repeatedly (1) trains this network to convergence; (2) prunes the trained network by computing a mask that zeros out a fraction (typically 20%) of the smallest-magnitude weights; and (3) rewinds the nonzero weights back to their values at the rewind point, then starts the next iteration by training the masked network to convergence. Each successive iteration yields a mask of higher sparsity. The final mask applied to the rewind point constitutes a highly sparse subnetwork, called a winning ticket if it trains to the same accuracy as the full network, i.e., if it is matching. A minimal sketch of this loop is given below.

* Equal contribution. Correspondence to: mansheej@stanford.edu; gkdz@google.com.
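The sketch below illustrates the IMP loop just described, in PyTorch. It is not the authors' implementation: the pretraining phase that produces `rewind_state` and the `train_to_convergence` helper (which must keep masked weights at zero during training) are hypothetical placeholders.

```python
import torch

def imp(model, rewind_state, train_to_convergence, iterations=10, frac=0.2):
    """Iterative magnitude pruning (IMP) with weight rewinding.

    `rewind_state` maps parameter names to their values at the rewind point;
    `train_to_convergence(model, masks)` is an assumed helper that trains
    the network while holding masked (zeroed) weights at zero.
    """
    # Start dense: a mask of ones for every weight tensor.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(iterations):
        # (1) Train the masked network to convergence.
        train_to_convergence(model, masks)
        with torch.no_grad():
            # (2) Prune: zero out the smallest fraction `frac` of surviving
            # weights, ranked by magnitude globally across all layers.
            surviving = torch.cat([
                p[masks[n].bool()].abs().flatten()
                for n, p in model.named_parameters()
            ])
            k = max(1, int(frac * surviving.numel()))
            threshold = surviving.kthvalue(k).values
            for n, p in model.named_parameters():
                masks[n] *= (p.abs() > threshold).float()
            # (3) Rewind: reset surviving weights to their rewind-point values.
            for n, p in model.named_parameters():
                p.copy_(rewind_state[n] * masks[n])
    return masks  # the final mask defines the sparse subnetwork

```

In this sketch, a rewind point could be obtained by training the dense network briefly and saving `{n: p.detach().clone() for n, p in model.named_parameters()}`; already-pruned weights are rewound to zero, so they remain below the threshold and stay pruned on subsequent iterations.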

