PROXYLESSKD: DIRECT KNOWLEDGE DISTILLATION WITH INHERITED CLASSIFIER FOR FACE RECOGNITION

Abstract

Knowledge Distillation (KD) refers to transferring knowledge from a large model to a smaller one, and is widely used to enhance model performance in machine learning. It aims to align the embedding spaces generated by the teacher and the student model (i.e., to make images with the same semantics share the same embedding across different models). In this work, we focus on its application to face recognition. We observe that existing knowledge distillation models optimize proxy tasks that force the student to mimic the teacher's behavior, instead of directly optimizing face recognition accuracy. Consequently, the obtained student models are not guaranteed to be optimal on the target task or able to benefit from advanced constraints such as the large margin constraint (e.g., margin-based softmax). We then propose a novel method named ProxylessKD that directly optimizes face recognition accuracy by inheriting the teacher's classifier as the student's classifier, which guides the student to learn discriminative embeddings in the teacher's embedding space. ProxylessKD is very easy to implement and sufficiently generic to be extended to other tasks beyond face recognition. We conduct extensive experiments on standard face recognition benchmarks, and the results demonstrate that ProxylessKD achieves superior performance over existing knowledge distillation methods.

1. INTRODUCTION

Knowledge Distillation (KD) is a process of transferring knowledge from a large model to a smaller one. This technique is widely used to enhance model performance in many machine learning tasks such as image classification (Hinton et al., 2015), object detection (Chen et al., 2017b), and speech translation (Liu et al., 2019c). When applied to face recognition, the embeddings of a gallery are usually extracted by a large teacher model, while the embeddings of the query images are extracted by a small student model. The student model is encouraged to align its embedding space with that of the teacher, so as to improve its recognition capability. Previous KD works promote consistency in final predictions (Hinton et al., 2015) or in the activations of hidden layers between student and teacher (Romero et al., 2014; Zagoruyko & Komodakis, 2016). Merely optimizing the consistency of predictions or activations brings a limited performance boost, since the student is often a small model with weaker capacity than the teacher. Later, Park et al. (2019) and Peng et al. (2019) proposed to exploit the correlations between instances to guide the student to mimic the teacher's feature relationships over a batch of input data, which achieves better performance. However, the above works all aim at guiding the student to mimic the behavior of the teacher, which is not sufficient for practical face recognition. In practice, it is important to directly align the embedding spaces of student and teacher, which enables models across different devices to share the same embedding space for feasible similarity comparison. A simple and direct method for this is to minimize the L2 distance between the embeddings extracted by the student and the teacher.
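This embedding-alignment baseline (which we refer to below as L2KD) can be sketched as follows; the function and variable names here are illustrative, not taken from any released code:

```python
# Minimal sketch of the L2KD baseline: the student is trained to minimize
# the squared L2 distance between its embedding and the teacher's embedding
# for the same image. Embeddings are plain lists of floats for clarity.

def l2kd_loss(student_embs, teacher_embs):
    """Mean squared L2 distance between paired student/teacher embeddings."""
    assert len(student_embs) == len(teacher_embs)
    total = 0.0
    for s, t in zip(student_embs, teacher_embs):
        total += sum((si - ti) ** 2 for si, ti in zip(s, t))
    return total / len(student_embs)

# Toy example: two 3-dimensional embeddings per model.
students = [[0.1, 0.2, 0.3], [0.5, 0.1, 0.4]]
teachers = [[0.1, 0.2, 0.3], [0.4, 0.1, 0.4]]
loss = l2kd_loss(students, teachers)
```

In an actual training loop the teacher embeddings would be detached constants and only the student network would receive gradients from this loss.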
However, this method (we call it L2KD) only considers minimizing the intra-class distance while ignoring maximizing the inter-class distance, and it is unable to benefit from powerful loss functions with a large margin constraint (e.g., Cosface loss (Wang et al., 2018a), Arcface loss (Deng et al., 2019a)) to further improve performance. The main contributions of this paper are summarized as follows:

• We analyze the shortcomings of existing knowledge distillation methods: they only optimize the proxy task rather than the target task, and they cannot conveniently integrate with advanced large margin constraints to further lift performance.

• We propose a simple yet effective KD method named ProxylessKD, which directly boosts embedding space alignment and can be easily combined with existing loss functions to achieve better performance.

• We conduct extensive experiments on standard face recognition benchmarks, and the results well demonstrate the effectiveness of the proposed ProxylessKD.

Previous works mostly optimize proxy tasks rather than the target task. In this work, we directly optimize face recognition accuracy by inheriting the teacher's classifier as the student's classifier to guide the student to learn discriminative embeddings in the teacher's embedding space. In (Deng et al., 2019b), the weights of the teacher model's margin inner-product layer are also directly copied to the student model and fixed during training; their motivation is that the student model can then be trained with better pre-defined inter-class information from the teacher model.
However, different from (Deng et al., 2019b), we first analyze the shortcomings of existing knowledge distillation methods: they target optimizing the proxy task rather than the target task.



Figure 1: The embedding distributions extracted by (a) L2KD and (b) ProxylessKD.

In this work, we propose an effective knowledge distillation method named ProxylessKD. According to Ranjan et al. (2017), the classifier neurons in a recognition model can be viewed as the approximate embedding centers of each class. This observation can be used to guide embedding learning: the classifier encourages each embedding to align with the approximate embedding center corresponding to the label of the image. Inspired by this, we propose to initialize the weight of the student's classifier with the weight of the teacher's classifier and fix it during the distillation process, which forces the student to produce an embedding space as consistent with the teacher's as possible. Different from previous knowledge distillation works (Hinton et al., 2015; Zagoruyko & Komodakis, 2016; Romero et al., 2014; Park et al., 2019; Peng et al., 2019) and L2KD, the proposed ProxylessKD not only directly optimizes the target task but also considers both minimizing the intra-class distance and maximizing the inter-class distance. Meanwhile, it can benefit from large margin constraints (e.g., Cosface loss (Wang et al., 2018a) and Arcface loss (Deng et al., 2019a)). As shown in Figure 1, the intra-class distance of ProxylessKD combined with Arcface loss is much smaller than that of L2KD, and the inter-class distance is much larger. Thus ProxylessKD can be expected to improve face recognition performance, which we validate experimentally.
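The inherited-classifier idea described above can be sketched as follows. This is a minimal illustration, assuming an Arcface-style additive angular margin on the ground-truth class; the scale and margin values, and all names, are illustrative rather than the paper's actual hyperparameters:

```python
import math

# Sketch of ProxylessKD's core mechanism: the student's classification layer is
# initialized with the teacher's classifier weights and kept frozen, so the
# student's embedding is pulled toward the teacher's class centers.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def proxylesskd_logits(embedding, teacher_classifier, scale=64.0, margin=0.5, label=None):
    """Cosine logits against the frozen teacher classifier, with an
    additive-angular (Arcface-style) margin applied to the true class."""
    e = normalize(embedding)
    logits = []
    for idx, center in enumerate(teacher_classifier):
        cos = sum(a * b for a, b in zip(e, normalize(center)))
        if label is not None and idx == label:
            theta = math.acos(max(min(cos, 1.0), -1.0))
            cos = math.cos(theta + margin)  # push the true class further away
        logits.append(scale * cos)
    return logits

def cross_entropy(logits, label):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[label] / sum(exps))

# Frozen classifier "inherited" from the teacher: two classes, 3-dim embeddings.
teacher_W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
emb = [0.9, 0.1, 0.0]  # student embedding close to class 0's center
loss = cross_entropy(proxylesskd_logits(emb, teacher_W, scale=8.0, label=0), 0)
```

In real training, `teacher_W` would be copied from the teacher's final linear layer and excluded from the optimizer, so only the student backbone is updated by this loss.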

2. RELATED WORK

Knowledge distillation. Knowledge distillation aims to transfer knowledge from a large teacher model to a small model. The pioneering work is Buciluǎ et al. (2006), and Hinton et al. (2015) popularized this idea by defining the concept of knowledge distillation (KD): training the small model (the student) by exploiting the soft targets provided by a cumbersome model (the teacher). Unlike the one-hot label, the soft targets from the teacher contain rich inter-class information, which can guide the student to better learn the fine-grained distribution of the data and thus lift performance. Many variants of model distillation strategies have been proposed and widely adopted in fields like image classification (Chen et al., 2018), object detection (Chen et al., 2017a), and semantic segmentation (Liu et al., 2019a; Park & Heo, 2020). Concretely, Zagoruyko & Komodakis (2016) proposed a response-based KD model, Attention Transfer (AT), which aims to teach the student to activate the same regions as the teacher model. Some relation-based distillation methods have also been developed, which encourage the student to mimic the relations between outputs at different stages (Yim et al., 2017) and among the samples in a batch (Park et al., 2019).
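The soft-target objective popularized by Hinton et al. (2015) can be sketched as follows; this is a simplified illustration of the temperature-softened matching loss, with illustrative names and hyperparameters:

```python
import math

# Sketch of the soft-target KD loss: the student is trained to match the
# teacher's temperature-softened class distribution. The T^2 factor keeps
# gradient magnitudes comparable across temperatures, as in the original paper.

def softmax(logits, temperature=1.0):
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's (fixed) and student's softened
    distributions, scaled by T^2."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return temperature ** 2 * -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [9.0, 3.0, 1.0]
student = [6.0, 2.0, 2.0]
loss = soft_target_loss(student, teacher)
```

Note that the loss is minimized (down to the teacher distribution's entropy) exactly when the student's softened distribution matches the teacher's, which is the sense in which the soft targets carry inter-class information.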

