A GENERAL FRAMEWORK FOR PROVING THE EQUIVARIANT STRONG LOTTERY TICKET HYPOTHESIS

Abstract

The Strong Lottery Ticket Hypothesis (SLTH) stipulates the existence of a subnetwork within a sufficiently overparametrized (dense) neural network that, when randomly initialized and without any training, achieves the accuracy of a fully trained target network. Recent works by da Cunha et al. (2022b) and Burkholz (2022a) demonstrate that the SLTH extends to translation equivariant networks (i.e., CNNs) with the same level of overparametrization as is needed for strong lottery tickets (SLTs) in dense networks. However, modern neural networks can incorporate more than just translation symmetry, and designing architectures equivariant to more general symmetries, such as rotations and permutations, has proven a powerful design principle. In this paper, we generalize the SLTH to functions that preserve the action of a group G, i.e., G-equivariant networks, and prove, with high probability, that one can approximate any G-equivariant network of fixed width and depth by pruning a randomly initialized overparametrized G-equivariant network to a G-equivariant subnetwork. We further prove that our prescribed overparametrization scheme is optimal, providing a lower bound on the number of effective parameters as a function of the error tolerance. We develop our theory for a broad range of groups, including subgroups of the Euclidean group E(2) and of the symmetric group (G ≤ S_n), allowing us to find SLTs for MLPs, CNNs, E(2)-steerable CNNs, and permutation equivariant networks as specific instantiations of our unified framework. Empirically, we verify our theory by pruning overparametrized E(2)-steerable CNNs, k-order GNNs, and message passing GNNs to match the performance of trained target networks.

1. INTRODUCTION

Many problems in deep learning benefit from massive amounts of annotated data and compute, enabling the training of models with in excess of a billion parameters. Despite the appeal of overparametrization, many real-world applications are resource-constrained (e.g., on device) and demand a reduced computational footprint for both training and deployment (Deng et al., 2020). A natural question that arises in these settings is: can we marry the benefits of large models, which are empirically easier to train effectively, with the computational efficiency of smaller sparse models? A standard line of work builds compressed models from larger fully trained networks, with minimal loss in accuracy, via weight pruning (Blalock et al., 2020). There is, however, growing empirical evidence that weight pruning can occur well before full model convergence. Frankle and Carbin (2019) postulate the extreme scenario, termed the lottery ticket hypothesis (LTH), in which a subnetwork extracted at initialization can be trained to the accuracy of the parent network, in effect "winning" the weight initialization lottery. In an even more striking phenomenon, Ramanujan et al. (2020) find that such sparse subnetworks not only exist at initialization but already achieve impressive performance without any training. This remarkable occurrence termed the
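The strong-lottery-ticket setting described above, a subnetwork of a randomly initialized network used as-is with no weight updates, can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the layer size and the magnitude-based mask rule are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized "overparametrized" layer; its weights are never trained.
W = rng.standard_normal((8, 8))

# A strong lottery ticket is a binary mask M over the random weights:
# the pruned subnetwork uses W * M directly. The magnitude threshold here
# is an arbitrary illustrative pruning rule.
M = (np.abs(W) > 0.5).astype(W.dtype)

def subnetwork_forward(x, W, M):
    """Forward pass through the masked (pruned) random layer with a ReLU."""
    return np.maximum(0.0, (W * M) @ x)

x = rng.standard_normal(8)
y = subnetwork_forward(x, W, M)
```

The point of the sketch is that all capacity comes from choosing the mask M, not from updating W; the SLTH asserts that for a sufficiently overparametrized random network, some such mask yields a subnetwork matching a trained target.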

