MASKED DISTILLATION WITH RECEPTIVE TOKENS

Abstract

Distilling from feature maps can be fairly effective for dense prediction tasks, since both feature discriminability and localization priors can be well transferred. However, not every pixel contributes equally to the performance, and a good student should learn from what really matters to the teacher. In this paper, we introduce a learnable embedding dubbed the receptive token to localize pixels of interest (PoIs) in the feature map, with a distillation mask generated via pixel-wise attention. Distillation is then performed within the mask via pixel-wise reconstruction. In this way, a distillation mask actually indicates a pattern of pixel dependencies within the teacher's feature maps. We thus adopt multiple receptive tokens to capture more sophisticated and informative pixel dependencies and further enhance the distillation. To obtain a group of masks, the receptive tokens are learned via the regular task loss with the teacher fixed, and we also leverage a Dice loss to enrich the diversity of the learned masks. Our method, dubbed MasKD, is simple and practical, and requires no task-specific priors. Experiments show that MasKD consistently achieves state-of-the-art performance on object detection and semantic segmentation benchmarks.
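The mask-generation step above can be illustrated with a minimal sketch. The exact attention formulation is defined in the method section of the paper; here we assume plain dot-product attention between learnable token embeddings and feature pixels, followed by a softmax over pixels, which yields one soft mask per receptive token. All shapes and names (`receptive_token_masks`, the feature sizes) are illustrative assumptions, not the released implementation.

```python
import torch

def receptive_token_masks(feat, tokens):
    """Generate N soft distillation masks from a teacher feature map via
    pixel-wise attention between learnable receptive tokens and pixels.

    feat:   (B, C, H, W) teacher feature map
    tokens: (N, C) learnable receptive-token embeddings
    returns (B, N, H, W) masks; each mask sums to 1 over the H*W pixels
    """
    B, C, H, W = feat.shape
    pixels = feat.flatten(2)                            # (B, C, H*W)
    attn = torch.einsum('nc,bcp->bnp', tokens, pixels)  # (B, N, H*W)
    masks = attn.softmax(dim=-1)                        # normalize over pixels
    return masks.view(B, -1, H, W)

# hypothetical sizes for illustration
feat = torch.randn(2, 256, 16, 16)
tokens = torch.nn.Parameter(torch.randn(6, 256))
masks = receptive_token_masks(feat, tokens)
print(masks.shape)  # torch.Size([2, 6, 16, 16])
```

With multiple tokens, each softmax-normalized mask can attend to a different pixel-dependency pattern; the Dice loss mentioned in the abstract would then be applied across the N masks to discourage them from collapsing onto the same regions.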

1. INTRODUCTION

Recent deep learning models tend to grow deeper and wider for ultimate performance (He et al., 2016; Xie et al., 2017; Li et al., 2019). However, given the limitations of computational and memory resources, such huge models are clumsy and inefficient to deploy on edge devices. As a friendly solution, knowledge distillation (KD) (Hinton et al., 2015; Romero et al., 2014) has been proposed to transfer knowledge from a heavy model (teacher) to a small model (student). Nevertheless, applying KD to dense prediction tasks such as object detection and semantic segmentation sometimes fails to achieve the expected improvements. For example, FitNet (Romero et al., 2014) mimics the teacher's feature maps element-wise but yields only minor improvement in object detection¹. Therefore, feature reconstruction over all pixels may not be a good option for dense prediction, since not every pixel contributes equally to the performance. Many follow-up works (Li et al., 2017; Wang et al., 2019; Sun et al., 2020; Guo et al., 2021) are thus dedicated to showing that distillation on sampled valuable regions can achieve noticeable improvements over the simple baseline methods. For example, Mimicking (Li et al., 2017) distills the positive regions proposed by the region proposal network (RPN) of the student; FGFI (Wang et al., 2019) and TADF (Sun et al., 2020) imitate valuable regions near the foreground boxes; Defeat (Guo et al., 2021) uses ground-truth bounding boxes to balance the loss weights of foreground and background distillations; GID (Dai et al., 2021) selects valuable regions according to the outputs of teacher and student. These methods all rely on bounding-box priors; however, are all pixels inside the bounding boxes necessarily valuable for distillation? The answer might be negative. As shown in Figure 1, the activated regions inside each object box are much smaller than the boxes. Also, different layers, even different strides of features in FPN,
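The contrast drawn above, between reconstructing all pixels (FitNet-style) and reconstructing only valuable regions, can be captured by a mask-weighted feature reconstruction loss. The following is a minimal sketch under assumed shapes; the function name `masked_distill_loss` and the per-pixel squared-error formulation are illustrative, and in practice the student feature is first projected to match the teacher's channel dimension.

```python
import torch
import torch.nn.functional as F

def masked_distill_loss(f_s, f_t, mask):
    """Mask-weighted feature reconstruction: only pixels the mask
    highlights contribute to the distillation loss.

    f_s, f_t: (B, C, H, W) student / teacher features (same shape here)
    mask:     (B, 1, H, W) non-negative weights over pixels
    """
    per_pixel = (f_s - f_t).pow(2).mean(dim=1, keepdim=True)  # (B, 1, H, W)
    return (per_pixel * mask).sum() / mask.sum().clamp_min(1e-6)

# hypothetical sizes for illustration
f_t = torch.randn(2, 8, 4, 4)
f_s = f_t + 0.1 * torch.randn_like(f_t)
ones = torch.ones(2, 1, 4, 4)
loss = masked_distill_loss(f_s, f_t, ones)
```

With a uniform all-ones mask this reduces to the plain element-wise MSE of FitNet; a mask concentrated on a few pixels instead restricts the reconstruction to those regions, which is the behavior the region-based methods above approximate with box priors.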



¹ FitNet (Romero et al., 2014) improves Faster RCNN-R50 by only 0.5%, while yielding no gain on RetinaNet-R50 (see Table 1).

Code availability: https://github.com/hunto/MasKD.

