PROXYLESSKD: DIRECT KNOWLEDGE DISTILLATION WITH INHERITED CLASSIFIER FOR FACE RECOGNITION

Abstract

Knowledge Distillation (KD) refers to transferring knowledge from a large model to a smaller one, and is widely used to enhance model performance in machine learning. It aims to align the embedding spaces produced by the teacher and the student models (i.e., images with the same semantics should share the same embedding across different models). In this work, we focus on its application in face recognition. We observe that existing knowledge distillation models optimize proxy tasks that force the student to mimic the teacher's behavior, instead of directly optimizing face recognition accuracy. Consequently, the obtained student models are not guaranteed to be optimal on the target task, nor can they benefit from advanced constraints such as the large-margin constraint (e.g., margin-based softmax). We then propose a novel method named ProxylessKD that directly optimizes face recognition accuracy by inheriting the teacher's classifier as the student's classifier, guiding the student to learn discriminative embeddings in the teacher's embedding space. The proposed ProxylessKD is very easy to implement and sufficiently generic to be extended to other tasks beyond face recognition. We conduct extensive experiments on standard face recognition benchmarks, and the results demonstrate that ProxylessKD achieves superior performance over existing knowledge distillation methods.
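The core idea described above can be summarized in a minimal PyTorch sketch: the student's classification head is initialized with the teacher's classifier weights and kept frozen, and the student backbone is trained with a margin-based softmax against those inherited class prototypes. The class names, shapes, and the CosFace-style margin below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of training a student embedding against an inherited (frozen)
# teacher classifier with a CosFace-style margin. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

class InheritedClassifierHead(torch.nn.Module):
    """Classifier whose weights are copied from the teacher and kept frozen."""
    def __init__(self, teacher_classifier_weight, scale=64.0, margin=0.35):
        super().__init__()
        # Inherit the teacher's class prototypes; do not update them.
        self.weight = torch.nn.Parameter(teacher_classifier_weight.clone(),
                                         requires_grad=False)
        self.scale = scale
        self.margin = margin

    def forward(self, embeddings, labels):
        # Cosine similarity between student embeddings and teacher prototypes.
        logits = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # CosFace-style additive margin applied to the target class only.
        target = F.one_hot(labels, num_classes=logits.size(1)).float()
        logits = self.scale * (logits - self.margin * target)
        return F.cross_entropy(logits, labels)

# Usage (shapes are illustrative): the student backbone outputs 512-d embeddings,
# and the teacher's classifier holds one 512-d prototype per identity.
num_ids, emb_dim = 1000, 512
teacher_w = torch.randn(num_ids, emb_dim)                   # taken from the trained teacher
head = InheritedClassifierHead(teacher_w)
student_emb = torch.randn(8, emb_dim, requires_grad=True)   # student backbone output
labels = torch.randint(0, num_ids, (8,))
loss = head(student_emb, labels)
loss.backward()                                             # gradients flow only to the student
```

Because the class prototypes are fixed to the teacher's, lowering this loss directly places the student's embeddings in the teacher's embedding space while still enjoying the inter-class separation induced by the margin.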

1. INTRODUCTION

Knowledge Distillation (KD) is the process of transferring knowledge from a large model to a smaller one. This technique is widely used to enhance model performance in many machine learning tasks such as image classification (Hinton et al., 2015), object detection (Chen et al., 2017b), and speech translation (Liu et al., 2019c). When applied to face recognition, the embeddings of a gallery are usually extracted by a larger teacher model, while the embeddings of the query images are extracted by a smaller student model. The student model is encouraged to align its embedding space with that of the teacher, so as to improve its recognition capability. Previous KD works promote consistency in the final predictions (Hinton et al., 2015) or in the activations of hidden layers between student and teacher (Romero et al., 2014; Zagoruyko & Komodakis, 2016). Only optimizing consistency in predictions or activations brings a limited performance boost, since the student is often a small model with weaker capacity than the teacher. Later, Park et al. (2019) and Peng et al. (2019) propose to exploit the correlation between instances to guide the student to mimic the teacher's feature relationships over a batch of input data, which achieves better performance.

However, the above works all aim at guiding the student to mimic the behavior of the teacher, which is not suitable for practical face recognition. In reality, it is very important to directly align the embedding spaces of student and teacher, so that models deployed on different devices share the same embedding space and similarity comparison remains feasible. A simple and direct approach is to minimize the L2 distance between the embeddings extracted by the student and the teacher. However, this method (which we call L2KD) only considers minimizing the intra-class distance and ignores maximizing the inter-class distance, and it is unable to benefit from powerful loss functions with a large-margin constraint (e.g., CosFace loss (Wang et al., 2018a), ArcFace loss (Deng et al., 2019a)) to further improve performance.
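For concreteness, the L2KD baseline criticized above can be sketched as follows. This is a minimal illustration under assumed names and shapes; the 512-d embeddings and batch size are hypothetical.

```python
# Minimal sketch of the L2KD baseline: the student is trained only to match the
# teacher's embedding of each image. Names and shapes are illustrative.
import torch
import torch.nn.functional as F

def l2kd_loss(student_embeddings, teacher_embeddings):
    # Pull each student embedding toward the teacher's embedding of the same image.
    # This shrinks intra-class distance but imposes no inter-class margin,
    # so it cannot exploit CosFace/ArcFace-style large-margin constraints.
    return F.mse_loss(student_embeddings, teacher_embeddings.detach())

# Example: embeddings for a batch of 8 face images (512-d, illustrative).
student_emb = torch.randn(8, 512, requires_grad=True)      # student backbone output
with torch.no_grad():
    teacher_emb = torch.randn(8, 512)                       # produced by the frozen teacher
loss = l2kd_loss(student_emb, teacher_emb)
loss.backward()
```

Contrasting this with the inherited-classifier sketch after the abstract makes the difference explicit: L2KD supervises the student per-sample with no notion of class boundaries, whereas the inherited classifier lets the student be trained with a margin-based softmax directly in the teacher's embedding space.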

