ME-MOMENTUM: EXTRACTING HARD CONFIDENT EXAMPLES FROM NOISILY LABELED DATA

Abstract

Examples that are close to the decision boundary, which we term hard examples, are essential to shaping accurate classifiers. Extracting confident examples has been widely studied in the community of learning with noisy labels. However, it remains elusive how to extract hard confident examples from noisy training data. In this paper, we propose a deep learning paradigm to solve this problem, which is built on the memorization effect of deep neural networks: they first learn simple patterns, i.e., patterns shared by multiple training examples. To extract hard confident examples, which contain non-simple patterns and are entangled with inaccurately labeled examples, we borrow the idea of momentum from physics. Specifically, we alternately update the confident examples and refine the classifier. Note that the confident examples extracted in the previous round can be exploited to learn a better classifier, and that the better classifier will in turn help identify better (and harder) confident examples. We call the approach the "Momentum of Memorization" (Me-Momentum). Empirical results on benchmark-simulated and real-world label-noise data illustrate the effectiveness of Me-Momentum for extracting hard confident examples, leading to better classification performance.
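
As a high-level illustration of the alternating procedure summarized above, the following is a minimal sketch and not the authors' implementation; it assumes a small-loss rule for selecting confident examples and an arbitrary schedule that keeps a growing fraction of examples each round (the dataset, model, and hyper-parameters are placeholders).

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def select_confident(model, dataset, keep_ratio, device):
    """Rank all examples by their current loss and keep the smallest-loss
    fraction; small loss is a common proxy for a confidently clean label
    (an assumption here, not the paper's stated criterion)."""
    model.eval()
    losses = []
    loader = DataLoader(dataset, batch_size=256)  # no shuffling: indices stay aligned
    with torch.no_grad():
        for i, (x, y) in enumerate(loader):
            per_sample = F.cross_entropy(model(x.to(device)), y.to(device), reduction="none")
            losses.extend((l.item(), i * 256 + j) for j, l in enumerate(per_sample))
    losses.sort()
    return [idx for _, idx in losses[: int(keep_ratio * len(losses))]]

def me_momentum(model, noisy_dataset, device, rounds=5, epochs_per_round=10):
    """Alternately refine the classifier on the current confident set and
    re-extract confident examples, so each round builds on the previous one."""
    confident_ids = list(range(len(noisy_dataset)))       # start from all noisy data
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for r in range(rounds):
        loader = DataLoader(Subset(noisy_dataset, confident_ids),
                            batch_size=128, shuffle=True)
        model.train()
        for _ in range(epochs_per_round):                 # refine the classifier
            for x, y in loader:
                loss = F.cross_entropy(model(x.to(device)), y.to(device))
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        keep_ratio = min(0.95, 0.5 + 0.1 * (r + 1))       # illustrative schedule only
        confident_ids = select_confident(model, noisy_dataset, keep_ratio, device)
    return model, confident_ids
```

The growing keep-ratio is meant to mimic the intuition that later rounds, equipped with a better classifier, can afford to admit harder (higher-loss) examples into the confident set.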

1. INTRODUCTION

As training datasets grow large while accurately labeling them is often expensive or sometimes even infeasible, cheap datasets with label noise are ubiquitous in many real-world applications. Without any care, label noise will degrade the performance of learning algorithms, especially those based on deep neural networks (Zhang et al., 2017). Learning with noisy labels (Angluin & Laird, 1988) aims to reduce the side-effect of label noise and has therefore become an important topic in machine learning. Existing methods for learning with noisy labels can be divided into two categories: algorithms that result in statistically consistent classifiers and algorithms that do not. Methods in the first category aim to design classifier-consistent algorithms (Zhang & Sabuncu, 2018; Kremer et al., 2018; Liu & Tao, 2016; Scott, 2015; Natarajan et al., 2013; Goldberger & Ben-Reuven, 2017; Patrini et al., 2017; Thekumparampil et al., 2018; Yu et al., 2018; Liu & Guo, 2020; Xu et al., 2019), where classifiers learned from the noisy data statistically converge to the optimal classifiers defined by the clean data. However, these methods rely heavily on the noise transition matrix (Liu & Tao, 2016; Patrini et al., 2017; Xia et al., 2019). In real-world applications, where label noise is often instance-dependent, the noise transition matrix is hard to learn (Cheng et al., 2020). To avoid estimating the noise transition matrix, methods in the second category employ heuristics to reduce the side-effect of label noise (Yu et al., 2019; Han et al., 2018b; Malach & Shalev-Shwartz, 2017; Ren et al., 2018; Jiang et al., 2018; Ma et al., 2018; Kremer et al., 2018; Tanaka et al., 2018; Reed et al., 2015; Han et al., 2018a; Guo et al., 2018; Veit et al., 2017; Vahdat, 2017; Li et al., 2017; 2020). These methods have been reported to work well empirically, especially in the setting of instance-dependent label noise. One promising direction in the second category is to extract examples with clean labels, i.e., confident examples (Northcutt et al., 2017; Cheng et al., 2020; Chen et al., 2019; Malach & Shalev-Shwartz, 2017; Han et al., 2018b; Yu et al., 2019; Thulasidasan et al., 2019; Nguyen et al., 2020; Dredze et al., 2008; Liu & Tao, 2016; Wang et al., 2017; Ren et al., 2018; Jiang et al., 2018). The idea is that, compared with the original noisy training data, the extracted examples are less noisy and thus

