NORM: KNOWLEDGE DISTILLATION VIA N-TO-ONE REPRESENTATION MATCHING

Abstract

Existing feature distillation methods commonly adopt One-to-one Representation Matching between a pre-selected teacher-student layer pair. In this paper, we present N-to-One Representation Matching (NORM), a new two-stage knowledge distillation method that relies on a simple Feature Transform (FT) module consisting of two linear layers. To preserve the intact information learnt by the teacher network, our FT module is inserted only after the last convolutional layer of the student network during training. The first linear layer projects the student representation to a feature space having N times as many feature channels as the teacher representation from the last convolutional layer, and the second linear layer contracts the expanded output back to the original feature space. By sequentially splitting the expanded student representation into N non-overlapping feature segments, each with the same number of feature channels as the teacher's, all segments can be readily forced to approximate the intact teacher representation simultaneously, formulating a novel many-to-one representation matching mechanism conditioned on a single teacher-student layer pair. After training, the FT module is naturally merged into the subsequent fully connected layer thanks to its linearity, introducing no extra parameters or architectural modifications to the student network at inference. Extensive experiments on different visual recognition benchmarks demonstrate the leading performance of our method. For instance, the ResNet18|MobileNet|ResNet50-1/4 model trained by NORM reaches 72.14%|74.26%|68.03% top-1 accuracy on the ImageNet dataset when using a pretrained ResNet34|ResNet50|ResNet50 model as the teacher, an absolute improvement of 2.01%|4.63%|3.03% over the individually trained counterpart.

1. INTRODUCTION

Knowledge distillation (KD), an effective way to train compact yet accurate neural networks through knowledge transfer, has attracted increasing research attention recently. Bucilǎ et al. (2006) and Ba & Caruana (2014) made early attempts in this direction. Hinton et al. (2015) presented the well-known KD using a teacher-student framework, which starts with pre-training a large network (teacher), and then trains a smaller target network (student) on the same dataset by forcing it to match the logits predicted by the teacher model. Many subsequent methods follow this two-stage KD scheme but use hidden-layer features as extra knowledge, while others use a one-stage KD scheme in which teacher and student networks are trained from scratch jointly (Guo et al., 2021). In this paper, we focus on two-stage feature distillation (FD) research, mainly for supervised image classification tasks. Existing two-stage FD methods primarily use feature maps (Romero et al., 2015), attention maps (Zagoruyko & Komodakis, 2017), or other forms of features (Chen et al., 2021a) at one or multiple hidden layers as knowledge representations. Generally, modern neural network architectures engineered on the ImageNet classification dataset (Russakovsky et al., 2015) adopt a multi-stage design paradigm. At a pair of same-staged layers, a teacher network typically has more output feature channels than a student network while keeping the same spatial feature size; at a cross-stage layer pair, all feature dimensions may differ. To align the feature dimensions, many feature transform (FT) designs have been proposed. However, prevailing teacher FT designs cause information loss due to dimension reduction, as studied in (Heo et al., 2019a; Tian et al., 2020). More importantly, we observe that existing FD methods adopt One-to-one Representation Matching (ORM) between each pre-selected teacher-student layer pair, indicating that only one knowledge transfer route is introduced.
We argue that this leaves considerable room to promote two-stage FD research. Driven by the above analysis, in this paper, we present a new two-stage feature distillation method dubbed N-to-One Representation Matching (NORM), which relies on a simple FT module consisting of two linear layers. An architectural comparison of popular ORM schemes and NORM is depicted in Figure 1. When formulating NORM, we leverage three basic principles: (1) using as few FTs as possible; (2) enabling a many-to-one feature mimicking flow via student representation expansion and splitting; (3) making the FT module absorbable. With the first principle, our FT module is inserted only after the last convolutional layer of the student network. In this way, the intact information learnt by the teacher network is preserved, and knowledge transfer flow only needs to be considered between the last convolutional layer pair. With the second principle, our FT module starts with a linear layer that projects the student representation to a feature space having N times as many feature channels as the teacher representation. This allows NORM to introduce many parallel knowledge transfer routes between a single teacher-student layer pair via simple student feature splitting and group-wise feature mimicking operations. With the third principle, our FT module ends with another linear layer that projects the expanded student representation back to the original feature space, and it does not contain any non-linear activation functions, making all of its operations linear. As a result, after training, the FT module can be directly merged into its subsequent fully connected layer, without introducing any extra parameters or architectural modifications to the student network at inference. We evaluate the performance of NORM on different visual recognition benchmarks.
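The expand-split-mimic mechanism and the absorbable property described above can be sketched in PyTorch as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the names (FTModule, norm_loss, absorb_ft_into_fc) are ours, the two linear layers are realized as bias-free 1x1 convolutions, and a plain MSE mimicking loss over the N segments is assumed.

```python
# Illustrative sketch of NORM's absorbable FT module (assumed details, not official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FTModule(nn.Module):
    """Two linear layers (as 1x1 convs): expand the student feature from C_s
    to N * C_t channels, then contract it back to C_s."""

    def __init__(self, c_student, c_teacher, n):
        super().__init__()
        self.n = n
        self.expand = nn.Conv2d(c_student, n * c_teacher, kernel_size=1, bias=False)
        self.contract = nn.Conv2d(n * c_teacher, c_student, kernel_size=1, bias=False)

    def forward(self, f_s):
        f_exp = self.expand(f_s)      # (B, N*C_t, H, W): expanded student feature
        f_out = self.contract(f_exp)  # (B, C_s, H, W): back to the original space
        return f_out, f_exp


def norm_loss(f_exp, f_t, n):
    """Split the expanded student feature into N segments of C_t channels each
    and force every segment to mimic the intact teacher feature (MSE assumed)."""
    segments = torch.chunk(f_exp, n, dim=1)  # N tensors of shape (B, C_t, H, W)
    return sum(F.mse_loss(seg, f_t) for seg in segments) / n


@torch.no_grad()
def absorb_ft_into_fc(ft, fc):
    """After training, fold the linear FT module into the fully connected layer
    that follows global average pooling. Since 1x1 convs without bias commute
    with spatial averaging, fc(pool(contract(expand(x)))) == merged(pool(x))."""
    w_e = ft.expand.weight.flatten(1)    # (N*C_t, C_s)
    w_c = ft.contract.weight.flatten(1)  # (C_s, N*C_t)
    merged = nn.Linear(w_e.shape[1], fc.out_features, bias=fc.bias is not None)
    merged.weight.copy_(fc.weight @ w_c @ w_e)
    if fc.bias is not None:
        merged.bias.copy_(fc.bias)
    return merged
```

Because both FT layers are linear and activation-free, their composition is itself a linear map on the pooled feature, which is why the merge above is exact (up to floating-point error) and the deployed student keeps its original architecture.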
On the CIFAR-100 dataset, the student models trained by NORM show a mean accuracy improvement of 2.88% over 7 teacher-student pairs with the same type of network architecture. Over 6 teacher-student pairs with different types of network architectures, the mean accuracy improvement reaches 5.81%, and the maximal gain is 6.92%. Leading results are also obtained on the large-scale ImageNet dataset. With NORM, the ResNet18|MobileNet|ResNet50-1/4 model reaches 72.14%|74.26%|68.03% top-1 accuracy when using a pre-trained ResNet34|ResNet50|ResNet50 model as the teacher, an absolute gain of 2.01%|4.63%|3.03% over the baseline model. Thanks to its simplicity and compatibility, we show that further improvements can be attained by combining NORM with other popular distillation strategies such as logits-based supervision (Hinton et al., 2015) and contrastive learning (Tian et al., 2020).



Figure 1: An architectural comparison of prevailing One-to-one Representation Matching (ORM) schemes (left) and our N-to-One Representation Matching (NORM, right). Based on ORM, existing feature distillation methods apply their feature transforms (FTs) to (1) the student network, (2) the teacher network, or (3) both of them, at any pre-selected hidden layer pair. Unlike them, NORM leverages a simple linear FT module added after the last convolutional layer of the student network to formulate a many-to-one representation matching scheme via feature expansion, splitting and mimicking. For inference, the FT module is merged into its subsequent fully connected layer, introducing no extra parameters or architectural modifications to the student network. Best viewed zoomed in; see the Method section for a detailed formulation of NORM.

