OVIT: AN ACCURATE SECOND-ORDER PRUNING FRAMEWORK FOR VISION TRANSFORMERS

Abstract

Models from the Vision Transformer (ViT) family have recently provided breakthrough results across image classification benchmarks such as ImageNet. Yet, they still face barriers to deployment, notably the fact that their accuracy can be severely impacted by compression techniques such as pruning. In this paper, we take a step towards addressing this issue by introducing Optimal ViT Surgeon (oViT), a new state-of-the-art weight sparsification method which is particularly well-suited to ViT models. At the technical level, oViT introduces a new weight pruning algorithm which leverages second-order information, and in particular can handle weight correlations accurately and efficiently. We complement this accurate one-shot pruner with an in-depth investigation of gradual pruning, augmentation, and recovery schedules for ViTs, which we show to be critical for successful compression. We validate our method via extensive experiments on classical ViT and DeiT models, on hybrid architectures such as XCiT, EfficientFormer, and Swin, as well as on general models such as highly-accurate ResNet and EfficientNet variants. Our results show for the first time that ViT-family models can in fact be pruned to high sparsity levels (e.g. ≥ 75%) with low impact on accuracy (≤ 1% relative drop). In addition, we show that our method is compatible with structured pruning methods and quantization, and that it can lead to significant speedups on a sparsity-aware inference engine.

1. INTRODUCTION

Attention-based Transformers (Vaswani et al., 2017) have revolutionized natural language processing (NLP), and have recently become popular in computer vision as well (Dosovitskiy et al., 2020; Touvron et al., 2021; Carion et al., 2020). The Vision Transformer (ViT) (Dosovitskiy et al., 2020; Touvron et al., 2021) and its extensions (Ali et al., 2021; Liu et al., 2021; Wang et al., 2021), which are the focus of our study, have been remarkably successful, despite encoding fewer inductive biases. However, the high accuracy of ViTs comes at the cost of large computational and parameter budgets. In particular, ViT models are well-known to be more parameter-heavy (Dosovitskiy et al., 2020; Touvron et al., 2021) than their convolutional counterparts. Consequently, a rapidly-expanding line of work has focused on reducing these costs via model compression, thus enabling the deployment of ViT models in resource-constrained settings.

Several recent works have adapted compression approaches to ViT models, investigating either structured pruning, which removes patches or tokens, or unstructured pruning, which removes individual weights. The consensus in the literature is that ViT models are generally less compressible than convolutional networks (CNNs) of the same accuracy. While the classic ResNet50 model (He et al., 2016) can be compressed to 80-90% sparsity with negligible loss of accuracy, e.g. (Frantar et al., 2021; Peste et al., 2021), the best currently-known results for similarly-accurate ViT models reach at most 50% sparsity while maintaining dense accuracy (Chen et al., 2021). It is therefore natural to ask whether this "lack of compressibility" is an inherent limitation of ViTs, or whether better results can be obtained via improved compression methods designed for these architectures.

Contributions. In this paper, we propose a new pruning method called Optimal ViT Surgeon (oViT), which improves the state-of-the-art accuracy-vs-sparsity trade-off for ViT-family models, and shows that they can be pruned to similar levels as CNNs. Our work is based on an in-depth investigation of ViT performance under pruning, and provides contributions across three main directions:

• A new second-order sparse projection. To address the fact that ViTs tend to lose significant accuracy upon each pruning step, we introduce a novel approximate second-order pruner called oViT, inspired by the classical second-order OBS framework (Hassibi et al., 1993). The key new feature of our pruner is that, for the first time, it can handle weight correlations accurately and efficiently; the classical OBS step we build on is sketched below for context.
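For background, the classical OBS framework prunes one weight at a time: it removes the weight whose deletion least increases a local quadratic approximation of the loss, then updates the remaining weights to compensate. The following is a minimal NumPy sketch of that classical step, not of oViT's new correlation-aware projection; it assumes the inverse Hessian H_inv has already been estimated, and the helper name obs_prune_one is ours, for illustration only.

```python
import numpy as np

def obs_prune_one(w, H_inv):
    """One classical OBS pruning step (Hassibi et al., 1993).

    w     -- flat weight vector of the layer
    H_inv -- inverse of the local loss Hessian for these weights
    Returns the updated weight vector and the index of the pruned weight.
    """
    # Saliency of weight q: the loss increase from optimally zeroing it,
    # rho_q = w_q^2 / (2 * [H^-1]_qq).
    saliencies = w ** 2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(saliencies))
    # Optimal compensating update of the remaining weights:
    # dw = -(w_q / [H^-1]_qq) * H^-1 e_q.
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w[q] = 0.0  # the update already zeroes w_q; set it explicitly for safety
    return w, q
```

Note that the compensating update requires a full column H^-1 e_q of the inverse Hessian; it is exactly this coupling between pruned and remaining weights that makes handling weight correlations expensive, and that second-order pruners must approximate efficiently at ViT scale.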

