LEARNED THRESHOLD PRUNING

Abstract

This paper presents a novel differentiable method for unstructured weight pruning of deep neural networks. Our learned-threshold pruning (LTP) method learns per-layer thresholds via gradient descent, unlike conventional methods, where thresholds are set as inputs. Making thresholds trainable also makes LTP computationally efficient, and hence scalable to deeper networks. For example, it takes 30 epochs for LTP to prune ResNet50 on ImageNet by a factor of 9.1. This is in contrast to other methods that search for per-layer thresholds via a computationally intensive iterative pruning and fine-tuning process. Additionally, with a novel differentiable L0 regularization, LTP is able to operate effectively on architectures with batch-normalization. This is important since L1 and L2 penalties lose their regularizing effect in networks with batch-normalization. Finally, LTP generates a trail of progressively sparser networks from which the desired pruned network can be picked based on sparsity and performance requirements. These features allow LTP to achieve competitive compression rates on ImageNet networks such as AlexNet (26.4× compression with 79.1% Top-5 accuracy) and ResNet50 (9.1× compression with 92.0% Top-5 accuracy). We also show that LTP effectively prunes modern compact architectures such as EfficientNet, MobileNetV2 and MixNet.

1. INTRODUCTION

Deep neural networks (DNNs) have provided state-of-the-art solutions for several challenging tasks in many domains such as computer vision, natural language understanding, and speech processing. With the increasing demand for deploying DNNs on resource-constrained edge devices, it has become even more critical to reduce the memory footprint of neural networks and to achieve power-efficient inference on these devices. Many methods in model compression Hassibi et al. (1993); LeCun et al. (1989); Han et al. (2015b); Zhang et al. (2018), model quantization Jacob et al. (2018); Lin et al. (2016); Zhou et al. (2017); Faraone et al. (2018) and neural architecture search Sandler et al. (2018); Tan & Le (2019a); Cai et al. (2018); Wu et al. (2019) have been introduced with these goals in mind. Neural network compression mainly falls into two categories: structured and unstructured pruning. Structured pruning methods, e.g., He et al. (2017); Li et al. (2017); Zhang et al. (2016); He et al. (2018), change the network's architecture by removing input channels from convolutional layers or by applying tensor decomposition to the layer weight matrices, whereas unstructured pruning methods such as Han et al. (2015b); Frankle & Carbin (2019); Zhang et al. (2018) rely on removing individual weights from the neural network. Although unstructured pruning achieves a much higher weight-sparsity ratio than structured pruning, it is thought to be less hardware friendly because the irregular sparsity is often difficult to exploit for efficient computation Anwar et al. (2017). However, recent advances in AI accelerator design Ignatov et al. (2018) have targeted support for highly efficient sparse matrix multiply-and-accumulate operations, so it is becoming increasingly important to develop state-of-the-art algorithms for unstructured pruning.

Most unstructured weight pruning methods are based on the assumption that smaller weights do not contribute as much to the model's performance. These methods iteratively prune the weights that are smaller than a certain threshold and retrain the network to regain the performance lost during pruning. A key challenge in unstructured pruning is to find an optimal setting for these pruning thresholds. Merely setting the same threshold for all layers may not be appropriate because the distributions and ranges of the weights in each layer can be very different. Also, different layers may have varying sensitivities to pruning, depending on their position in the network (initial layers versus final layers) or their type (depth-wise separable versus standard convolutional layers). The best setting of thresholds should consider these layer-wise characteristics. Many methods Zhang et al. (2018); Ye et al. (2019); Manessi et al. (2018) propose ways to search these layer-wise thresholds but become quite computationally expensive for networks with a large number of layers, such as ResNet50 or EfficientNet.

In this paper, we propose Learned Threshold Pruning (LTP) to address these challenges. Our method uses a separate pruning threshold for every layer. We make the layer-wise thresholds trainable, allowing the training procedure to find optimal thresholds alongside the layer weights during fine-tuning. An added benefit of making these thresholds trainable is that it makes LTP fast: the method converges quickly compared to other iterative methods such as Zhang et al. (2018); Ye et al. (2019). LTP also achieves high compression on newer networks Tan & Le (2019a); Sandler et al. (2018); Tan & Le (2019b) with squeeze-excite Hu et al. (2018) and depth-wise convolutional layers Chollet (2017).

Our key contributions in this work are the following:

• We propose a gradient-based algorithm for unstructured pruning that introduces a learnable threshold parameter for every layer. This threshold is trained jointly with the layer weights. We use soft-pruning and soft L0 regularization to make this process end-to-end trainable.

• We show that making layer-wise thresholds trainable makes LTP computationally very efficient compared to other methods that search for per-layer thresholds via an iterative pruning and fine-tuning process; e.g., LTP pruned ResNet50 to 9.11× in just 18 epochs with 12 additional epochs of fine-tuning, and MixNet-S to 2× in 17 epochs without the need for further fine-tuning.

• We demonstrate state-of-the-art compression ratios on newer architectures, i.e., 1.33×, 3× and 2× for MobileNetV2, EfficientNet-B0 and MixNet-S, respectively, which are already optimized for efficient inference, with less than 1% drop in Top-1 accuracy.

• The proposed method provides a trace of checkpoints with varying pruning ratios and accuracies, so the user can choose any desired checkpoint based on the sparsity and performance requirements of the target application.

2. RELATED WORK

Several methods have been proposed for both structured and unstructured pruning of deep networks. Methods like He et al. (2017); Li et al. (2017) use layer-wise statistics and data to remove input channels from convolutional layers. Other methods apply tensor decompositions to neural network layers: Denton et al. (2014); Jaderberg et al. (2014); Zhang et al. (2016) apply SVD to decompose weight matrices, and Kim et al. (2015); Lebedev et al. (2014) compress with Tucker and CP decompositions. An overview of these methods can be found in Kuzmin et al. (2019). These methods are all applied after training a network and require fine-tuning afterwards. Other structured methods change the shape of a neural network while training. Methods like Bayesian Compression Louizos et al. (2017), VIBnets Dai et al. (2018) and L1/L0-regularization Srinivas et al. (2017); Louizos et al. (2018) add trainable gates to each layer to prune while training. In this paper we consider unstructured pruning, i.e., removing individual weights from a network. This type of pruning was already in use in 1989 in the Optimal Brain Damage LeCun et al. (1989) and Optimal Brain Surgeon Hassibi et al. (1993) papers, which removed individual weights in neural networks by use of Hessian information. More recently, Han et al. (2015a) used the method from Han et al. (2015b) as part of their full model-compression pipeline, removing weights with small magnitudes and fine-tuning afterwards. This type of method is frequently used for pruning, and has recently been picked up for finding DNN subnetworks that perform just as well as the original network in Frankle & Carbin (2019); Zhou et al. (2019). Another recent application of Han et al. (2015b) is Renda et al. (2020), where weight- and learning-rate-rewinding schemes are used to achieve competitive pruning performance. These methods, however, are computationally expensive, requiring many hundreds of epochs of re-training.

Finally, papers such as Molchanov et al. (2017); Ullrich et al. (2017) apply a variational Bayesian framework to network pruning. Other methods that are similar to our work are Zhang et al. (2018) and Ye et al. (2019). These papers apply the alternating direction method of multipliers (ADMM) to pruning, which slowly coaxes a network into pruning weights with an L2-regularization-like term. One problem of these methods is that they are
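To make the contrast between conventional threshold pruning and the soft-pruning/soft-L0 idea concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation): hard magnitude pruning zeros weights below a fixed squared-magnitude threshold, while the sigmoid-gated "soft" variant keeps both the pruned weights and an approximate count of surviving weights differentiable with respect to the threshold, so the threshold itself could be learned by gradient descent. The function names, temperature value, and example weights are all assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hard_prune(weights, threshold):
    """Conventional magnitude pruning: zero out every weight whose
    squared magnitude falls below a fixed per-layer threshold.
    Non-differentiable with respect to the threshold."""
    return [0.0 if w * w < threshold else w for w in weights]

def soft_prune(weights, threshold, temp=0.02):
    """Soft pruning: scale each weight by a sigmoid gate of
    (w^2 - threshold). Large weights pass almost unchanged, small
    weights are attenuated toward zero, and the result remains
    differentiable in both the weights and the threshold."""
    return [w * sigmoid((w * w - threshold) / temp) for w in weights]

def soft_l0(weights, threshold, temp=0.02):
    """Soft L0 'norm': the sum of the gates, a differentiable
    approximation of the number of surviving (unpruned) weights."""
    return sum(sigmoid((w * w - threshold) / temp) for w in weights)

# Illustrative layer weights and squared-magnitude threshold.
layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
tau = 0.1

print(hard_prune(layer, tau))   # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
print([round(w, 4) for w in soft_prune(layer, tau)])
print(round(soft_l0(layer, tau), 3))  # ≈ 3, the surviving-weight count
```

In an actual training loop the gates would be computed inside the forward pass of an autodiff framework, so the gradient of the task loss plus the soft-L0 penalty flows back into each layer's threshold parameter.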

