INPL: PSEUDO-LABELING THE INLIERS FIRST FOR IMBALANCED SEMI-SUPERVISED LEARNING

Abstract

Recent state-of-the-art methods in imbalanced semi-supervised learning (SSL) rely on confidence-based pseudo-labeling with consistency regularization. To obtain high-quality pseudo-labels, a high confidence threshold is typically adopted. However, it has been shown that softmax-based confidence scores in deep networks can be arbitrarily high for samples far from the training data, and thus, the pseudo-labels for even high-confidence unlabeled samples may still be unreliable. In this work, we present a new perspective of pseudo-labeling for imbalanced SSL. Without relying on model confidence, we propose to measure whether an unlabeled sample is likely to be "in-distribution"; i.e., close to the current training data. To decide whether an unlabeled sample is "in-distribution" or "out-of-distribution", we adopt the energy score from out-of-distribution detection literature. As training progresses and more unlabeled samples become in-distribution and contribute to training, the combined labeled and pseudo-labeled data can better approximate the true class distribution to improve the model. Experiments demonstrate that our energy-based pseudo-labeling method, InPL, albeit conceptually simple, significantly outperforms confidence-based methods on imbalanced SSL benchmarks. For example, it produces around 3% absolute accuracy improvement on CIFAR10-LT. When combined with state-of-the-art long-tailed SSL methods, further improvements are attained. In particular, in one of the most challenging scenarios, InPL achieves a 6.9% accuracy improvement over the best competitor.
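The energy score referenced above comes from the out-of-distribution detection literature: for a logit vector f(x), the free energy is E(x) = -T log Σ_i exp(f_i(x)/T), and lower (more negative) energy indicates a sample better supported by the training distribution. As a hedged illustration (the temperature T and the comparison below are illustrative, not the paper's exact configuration), the score can be computed as:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Free energy of a logit vector: E(x) = -T * log(sum_i exp(logit_i / T)).

    Lower (more negative) energy suggests a more in-distribution sample;
    a threshold on this score can then gate pseudo-labeling.
    """
    logits = np.asarray(logits, dtype=np.float64) / T
    m = logits.max()  # subtract max for numerical stability
    return -T * (m + np.log(np.sum(np.exp(logits - m))))

# A sample with a large, well-supported logit has lower energy than an
# ambiguous one, so it would be treated as "in-distribution" first.
print(energy_score([8.0, 0.5, 0.3]) < energy_score([1.0, 0.5, 0.3]))  # True
```

Unlike the max-softmax confidence, the energy score aggregates all logits, so it is not normalized away by the softmax and is less prone to being arbitrarily high far from the training data.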

1. INTRODUCTION

In recent years, the frontier of semi-supervised learning (SSL) has seen significant advances through pseudo-labeling (Rosenberg et al., 2005; Lee et al., 2013) combined with consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2020; Sohn et al., 2020; Xie et al., 2020a). Pseudo-labeling, a type of self-training (Scudder, 1965; McLachlan, 1975) technique, converts model predictions on unlabeled samples into soft or hard labels as optimization targets, while consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Berthelot et al., 2019; 2020; Sohn et al., 2020; Xie et al., 2020a) trains a model to produce the same outputs for two different views (e.g., strong and weak augmentations) of an unlabeled sample. However, most methods are designed for the balanced SSL setting where each class has a similar number of training samples, whereas most real-world data are naturally imbalanced, often following a long-tailed distribution.

To better facilitate real-world scenarios, imbalanced SSL has recently received increasing attention. State-of-the-art imbalanced SSL methods (Kim et al., 2020; Wei et al., 2021; Lee et al., 2021) are built upon the pseudo-labeling and consistency regularization frameworks (Sohn et al., 2020; Xie et al., 2020a) by augmenting them with additional modules that tackle specific imbalance issues (e.g., using per-class balanced sampling (Lee et al., 2021; Wei et al., 2021)). Critically, these methods still rely on confidence-based thresholding (Lee et al., 2013; Sohn et al., 2020; Xie et al., 2020a; Zhang et al., 2021) for pseudo-labeling, in which only the unlabeled samples whose predicted class confidence surpasses a very high threshold (e.g., 0.95) are pseudo-labeled for training. Confidence-based pseudo-labeling, despite its success in balanced SSL, faces two major drawbacks in the imbalanced, long-tailed setting. First, applying a high confidence threshold yields significantly
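The confidence-based thresholding described above can be sketched as follows. This is a minimal FixMatch-style selection rule, not the exact implementation of any cited method; the 0.95 threshold is the value mentioned in the text, and the logit values are made-up examples:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_pseudo_labels(weak_logits, threshold=0.95):
    """Keep an unlabeled sample only when its max softmax probability
    (under weak augmentation) exceeds the confidence threshold; the
    argmax class becomes its hard pseudo-label."""
    probs = softmax(np.asarray(weak_logits, dtype=np.float64))
    mask = probs.max(axis=1) > threshold   # which samples enter the loss
    labels = probs.argmax(axis=1)          # hard pseudo-labels
    return labels, mask

# Hypothetical logits for two unlabeled samples over three classes.
logits = np.array([[9.0, 0.1, 0.2],    # confident -> pseudo-labeled
                   [1.2, 1.0, 0.9]])   # uncertain -> discarded
labels, mask = confidence_pseudo_labels(logits)
print(labels, mask)  # [0 0] [ True False]
```

In a full consistency-regularization pipeline, the retained pseudo-labels would supervise the model's predictions on strongly augmented views of the same samples; the drawback raised in the text is that this max-softmax score can be unreliable precisely when the data are long-tailed.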

