Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning a Randomly Weighted Network

Abstract

Recently, Frankle & Carbin (2019) demonstrated that randomly-initialized dense networks contain subnetworks that, once found, can be trained to reach test accuracy comparable to that of the trained dense network. However, finding these high-performing trainable subnetworks is expensive, requiring an iterative process of training and pruning weights. In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3). This provides a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full-precision neural networks. We also propose an algorithm for finding multi-prize tickets (MPTs) and test it through a series of experiments on the CIFAR-10 and ImageNet datasets. Empirical results indicate that as models grow deeper and wider, multi-prize tickets start to reach test accuracy similar to (and sometimes even higher than) their significantly larger, full-precision, weight-trained counterparts. Without ever updating the weight values, our MPTs-1/32 not only set new binary weight network state-of-the-art (SOTA) Top-1 accuracy (94.8% on CIFAR-10 and 74.03% on ImageNet) but also outperform their full-precision counterparts by 1.78% and 0.76%, respectively. Further, our MPT-1/1 achieves SOTA Top-1 accuracy (91.9%) for binary neural networks on CIFAR-10. Code and pre-trained models are available at: https://github.com/chrundle/biprop.

1. INTRODUCTION

Deep learning (DL) has made significant breakthroughs in a wide range of applications (Goodfellow et al., 2016). These performance improvements can be attributed to the significant growth in model size and the availability of massive computational resources to train such models. However, these gains have come at the cost of large memory consumption, high inference time, and increased power consumption. This not only limits the potential applications where DL can make an impact but also has serious consequences, such as (a) generating a huge carbon footprint and (b) creating roadblocks to the democratization of AI. Note that significant parameter redundancy and a large number of floating-point operations are key factors behind these costs. Thus, to remove this redundancy from DNNs, one can either (a) Prune: remove non-essential connections from an existing dense network, or (b) Quantize: constrain the full-precision (FP) weight and activation values to a set of discrete values, which allows them to be represented using fewer bits. Further, one can exploit the complementary nature of pruning and quantization to combine their strengths.
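To make the prune-and-quantize idea concrete, the following is a minimal PyTorch-style sketch of applying a pruning mask and binary quantization to a randomly initialized weight tensor. It is not the paper's biprop algorithm: there, the mask is found by optimizing per-weight scores with a straight-through estimator while the random weights stay fixed; here, weight magnitude is used as a stand-in scoring rule purely for illustration, and the function name and keep_ratio parameter are hypothetical.

```python
import torch

def prune_and_binarize(weight: torch.Tensor, keep_ratio: float = 0.5):
    """Return a binary-weight subnetwork of a random, untrained weight tensor.

    weight:     randomly initialized weights (never updated)
    keep_ratio: fraction of weights kept by the pruning mask
    """
    # 1. Prune: keep the top-`keep_ratio` fraction of weights by score.
    #    (Illustrative magnitude criterion; the actual method learns scores.)
    scores = weight.abs().flatten()
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    mask = (weight.abs() >= threshold).float()

    # 2. Quantize: binarize surviving weights to {-alpha, +alpha}, where
    #    alpha rescales them to match the average magnitude of kept weights.
    alpha = (weight * mask).abs().sum() / mask.sum()
    binary_weight = alpha * torch.sign(weight) * mask

    return binary_weight, mask

# Usage: a randomly weighted layer yields a sparse, binary-weight layer
# without any weight training.
w = torch.randn(256, 512)                      # random, untrained weights
bw, m = prune_and_binarize(w, keep_ratio=0.5)
```

The sketch illustrates how pruning (the mask) and quantization (the sign plus a scalar gain) compose: the only per-weight information that survives is whether a weight is kept and its sign.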

