NORM: KNOWLEDGE DISTILLATION VIA N-TO-ONE REPRESENTATION MATCHING

Abstract

Existing feature distillation methods commonly adopt one-to-one representation matching between pre-selected teacher-student layer pairs. In this paper, we present N-to-One Representation Matching (NORM), a new two-stage knowledge distillation method, which relies on a simple Feature Transform (FT) module consisting of two linear layers. To preserve the intact information learnt by the teacher network, during training our FT module is inserted only after the last convolutional layer of the student network. The first linear layer projects the student representation into a feature space having N times as many feature channels as the teacher representation from its last convolutional layer, and the second linear layer contracts the expanded output back to the original feature space. By sequentially splitting the expanded student representation into N non-overlapping feature segments, each having the same number of feature channels as the teacher's, all segments can readily be forced to approximate the intact teacher representation simultaneously, formulating a novel many-to-one representation matching mechanism conditioned on a single teacher-student layer pair. After training, the FT module is naturally merged into the subsequent fully connected layer thanks to its linear property, introducing no extra parameters or architectural modifications to the student network at inference. Extensive experiments on different visual recognition benchmarks demonstrate the leading performance of our method. For instance, the ResNet18|MobileNet|ResNet50-1/4 model trained by NORM reaches 72.14%|74.26%|68.03% top-1 accuracy on the ImageNet dataset when using a pretrained ResNet34|ResNet50|ResNet50 model as the teacher, achieving an absolute improvement of 2.01%|4.63%|3.03% over the individually trained counterpart. Code is available at https://github.com/OSVAI/NORM.
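To make the matching mechanism described above concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation (see the linked repository for the official code). The names FeatureTransform, n_to_one_matching_loss, and merge_ft_into_fc are illustrative; realizing the two linear layers as 1x1 convolutions and using an MSE matching term are assumptions made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTransform(nn.Module):
    """Illustrative FT module: a linear expansion to N x the teacher's channel
    count followed by a linear contraction back to the student's channel count,
    realized with 1x1 convolutions (i.e., per-position linear layers)."""

    def __init__(self, student_channels: int, teacher_channels: int, n: int):
        super().__init__()
        self.n = n
        # First linear layer: expand to N times the teacher's feature channels.
        self.expand = nn.Conv2d(student_channels, n * teacher_channels,
                                kernel_size=1, bias=False)
        # Second linear layer: contract back to the original student feature space.
        self.contract = nn.Conv2d(n * teacher_channels, student_channels,
                                  kernel_size=1, bias=False)

    def forward(self, student_feat: torch.Tensor):
        expanded = self.expand(student_feat)      # (B, N * C_t, H, W)
        contracted = self.contract(expanded)      # (B, C_s, H, W), fed to the student head
        return expanded, contracted


def n_to_one_matching_loss(expanded: torch.Tensor, teacher_feat: torch.Tensor, n: int):
    """Split the expanded student representation into N non-overlapping segments
    and force every segment to approximate the intact teacher representation."""
    segments = torch.chunk(expanded, n, dim=1)    # N tensors of shape (B, C_t, H, W)
    return sum(F.mse_loss(seg, teacher_feat) for seg in segments) / n


def merge_ft_into_fc(ft: FeatureTransform, fc: nn.Linear) -> nn.Linear:
    """After training, fold the purely linear FT module into the following fully
    connected layer (global average pooling commutes with the 1x1 convolutions)."""
    w_expand = ft.expand.weight.flatten(1)        # (N * C_t, C_s)
    w_contract = ft.contract.weight.flatten(1)    # (C_s, N * C_t)
    merged = nn.Linear(w_expand.shape[1], fc.out_features, bias=fc.bias is not None)
    merged.weight.data.copy_(fc.weight.data @ w_contract @ w_expand)
    if fc.bias is not None:
        merged.bias.data.copy_(fc.bias.data)
    return merged
```

In this sketch, a distillation step would combine the usual classification loss computed on the contracted branch with the matching term on the expanded branch (e.g., total loss = cross-entropy + a weighting factor times n_to_one_matching_loss), and merge_ft_into_fc would be called once after training so that the deployed student keeps its original architecture.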

1. INTRODUCTION

Knowledge distillation (KD), an effective way to train compact yet accurate neural networks through knowledge transfer, has attracted increasing research attention recently. Bucilǎ et al. (2006) and Ba & Caruana (2014) made early attempts in this direction. Hinton et al. (2015) presented the well-known KD method using a teacher-student framework, which starts with pre-training a large network (teacher) and then trains a smaller target network (student) on the same dataset by forcing it to match the logits predicted by the teacher model. Many subsequent methods follow this two-stage KD scheme but use hidden layer features as extra knowledge, while others use a one-stage KD scheme in which the teacher and student networks are trained jointly from scratch (Guo et al., 2021). In this paper, we focus on two-stage feature distillation (FD) research, mainly for supervised image classification tasks. Existing two-stage FD methods primarily use feature maps (Romero et al., 2015), attention maps (Zagoruyko & Komodakis, 2017), or other forms of features (Chen et al., 2021a) at one or multiple hidden layers as knowledge representations. Generally, modern neural network architectures engineered on the ImageNet classification dataset (Russakovsky et al., 2015) adopt a multi-stage design paradigm. At a pair of same-stage layers, a teacher network typically has more output feature channels than a student network while keeping the same spatial feature size. All feature

