A GRADIENT FLOW FRAMEWORK FOR ANALYZING NETWORK PRUNING

Abstract

Recent network pruning methods focus on pruning models early-on in training. To estimate the impact of removing a parameter, these methods use importance measures that were originally designed to prune trained models. Despite lacking justification for their use early-on in training, such measures result in surprisingly low accuracy loss. To better explain this behavior, we develop a general framework that uses gradient flow to unify state-of-the-art importance measures through the norm of model parameters. We use this framework to determine the relationship between pruning measures and evolution of model parameters, establishing several results related to pruning models early-on in training: (i) magnitude-based pruning removes parameters that contribute least to reduction in loss, resulting in models that converge faster than magnitude-agnostic methods; (ii) loss-preservation based pruning preserves first-order model evolution dynamics and its use is therefore justified for pruning minimally trained models; and (iii) gradient-norm based pruning affects second-order model evolution dynamics, such that increasing gradient norm via pruning can produce poorly performing models. We validate our claims on several VGG-13, MobileNet-V1, and ResNet-56 models trained on CIFAR-10/CIFAR-100.
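The three pruning families compared above can be made concrete on a toy problem. The sketch below (our own illustration, not from the paper) uses a quadratic loss so the gradient and Hessian are available in closed form, and computes a per-parameter score for each family: squared magnitude, the first-order Taylor term |θ·g| for loss preservation, and the Hessian-gradient product θ·(Hg) that governs how removal changes the gradient norm.

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so the gradient
# (A @ theta) and Hessian (A) are exact and easy to inspect.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)            # symmetric positive-definite Hessian
theta = rng.standard_normal(5)

grad = A @ theta                   # g = dL/dtheta
hess_grad = A @ grad               # H @ g, the Hessian-gradient product

# Per-parameter importance scores for the three families in the abstract:
magnitude = theta ** 2                    # magnitude-based pruning
loss_preservation = np.abs(theta * grad)  # first-order Taylor, |theta_i * g_i|
grad_norm_impact = theta * hess_grad      # second-order: effect on ||g||^2

for name, score in [("magnitude", magnitude),
                    ("loss-preservation", loss_preservation),
                    ("gradient-norm", grad_norm_impact)]:
    # Each measure would prune the parameter with the smallest |score|.
    print(name, "prunes parameter", int(np.argmin(np.abs(score))))
```

Note that the three scores generally rank parameters differently, which is why the paper's question of which ranking is justified early in training is non-trivial.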

1. INTRODUCTION

The use of Deep Neural Networks (DNNs) in intelligent edge systems has been enabled by extensive research on model compression. "Pruning" techniques are commonly used to remove "unimportant" filters to either preserve or promote specific, desirable model properties. Most pruning methods were originally designed to compress trained models, with the goal of reducing inference costs only. For example, Li et al. (2017); He et al. (2018) proposed to remove filters with small ℓ1/ℓ2 norm, thus ensuring minimal change in model output. Molchanov et al. (2017; 2019); Theis et al. (2018) proposed to preserve the loss of a model, generally using Taylor expansions around a filter's parameters to estimate the change in loss as a function of its removal. Recent works focus on pruning models at initialization (Lee et al. (2019; 2020)) or after minimal training (You et al. (2020)), thus enabling reduction in both inference and training costs. To estimate the impact of removing a parameter, these methods use the same importance measures as were designed for pruning trained models. Since such measures focus on preserving model outputs or loss, Wang et al. (2020) argue they are not well-motivated for pruning models early-on in training. However, in this paper, we demonstrate that if the relationship between importance measures used for pruning trained models and the evolution of model parameters is established, their use early-on in training can be better justified. In particular, we employ gradient flow¹ to develop a general framework that relates state-of-the-art importance measures used in network pruning through the norm of model parameters. This framework establishes the relationship between regularly used importance measures and the evolution of a model's parameters, thus demonstrating why measures designed to prune trained models also perform well early-on in training.

¹ Gradient flow refers to gradient descent with an infinitesimal learning rate; see Equation 6 for a short primer.
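The gradient-flow view can be illustrated with a minimal numerical sketch (our own toy example, with an assumed loss, not from the paper): discretizing the flow dθ/dt = −∇L(θ) with a small forward-Euler step recovers ordinary gradient descent, and as the step shrinks the iterates track the continuous-time solution.

```python
import numpy as np

# Gradient flow d(theta)/dt = -grad L(theta), integrated with forward
# Euler. With a vanishing step size this is exactly gradient descent
# with an infinitesimal learning rate.
# Toy loss (an assumption for illustration): L(theta) = 0.5 * ||theta||^2,
# whose gradient is simply theta.
def grad_L(theta):
    return theta

theta = np.array([1.0, -2.0])
dt = 1e-3                          # small step approximating the flow
for _ in range(10_000):            # integrates up to time t = 10
    theta = theta - dt * grad_L(theta)

# The flow has the closed-form solution theta(t) = theta(0) * exp(-t),
# so after t = 10 the parameters have decayed by roughly exp(-10).
print(theta)
```

For this quadratic loss the Euler iterates converge to the exponential decay predicted by the continuous flow, which is the sense in which the paper's gradient-flow analysis describes the training dynamics of gradient descent.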

