THEORETICAL ANALYSIS OF SELF-TRAINING WITH DEEP NETWORKS ON UNLABELED DATA

Abstract

Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a simple but realistic "expansion" assumption, which states that a low-probability subset of the data must expand to a neighborhood with large probability relative to the subset. We also assume that neighborhoods of examples in different classes have minimal overlap. We prove that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels. By using off-the-shelf generalization bounds, we immediately convert this result to sample complexity guarantees for neural nets that are polynomial in the margin and Lipschitzness. Our results help explain the empirical successes of recently proposed self-training algorithms which use input consistency regularization.

1. INTRODUCTION

Though supervised learning with neural networks has become standard and reliable, it still often requires massive labeled datasets. As labels can be expensive or difficult to obtain, leveraging unlabeled data in deep learning has become an active research area. Recent works in semi-supervised learning (Chapelle et al., 2010; Kingma et al., 2014; Kipf & Welling, 2016; Laine & Aila, 2016; Sohn et al., 2020; Xie et al., 2020) and unsupervised domain adaptation (Ben-David et al., 2010; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018; Shu et al., 2018; Zhang et al., 2019) leverage large amounts of unlabeled data as well as labeled data from the same distribution or a related distribution. Recent progress in unsupervised learning or representation learning (Hinton et al., 1999; Doersch et al., 2015; Gidaris et al., 2018; Misra & Maaten, 2020; Chen et al., 2020a;b; Grill et al., 2020) learns high-quality representations without using any labels.

Self-training is a common algorithmic paradigm for leveraging unlabeled data with deep networks. Self-training methods train a model to fit pseudolabels, that is, predictions on unlabeled data made by a previously-learned model (Yarowsky, 1995; Grandvalet & Bengio, 2005; Lee, 2013). Recent work also extends these methods to enforce stability of predictions under input transformations such as adversarial perturbations (Miyato et al., 2018) and data augmentation (Xie et al., 2019). These approaches, known as input consistency regularization, have been successful in semi-supervised learning (Sohn et al., 2020; Xie et al., 2020), unsupervised domain adaptation (French et al., 2017; Shu et al., 2018), and unsupervised learning (Hu et al., 2017; Grill et al., 2020). Despite these empirical successes, theoretical progress in understanding how to use unlabeled data has lagged.
Whereas supervised learning is relatively well-understood, statistical tools for reasoning about unlabeled data are not as readily available. Around 25 years ago, Vapnik (1995) proposed the transductive SVM for unlabeled data, which can be viewed as an early version of self-training, yet there is little work showing that this method improves sample complexity (Derbeko et al., 2004). Working with unlabeled data requires proper assumptions on the input distribution (Ben-David et al., 2008). Recent papers (Carmon et al., 2019; Raghunathan et al., 2020; Chen et al., 2020c; Kumar et al., 2020; Oymak & Gulcu, 2020) analyze self-training in various settings, but mainly for linear models, and often require that the data is Gaussian or near-Gaussian. Kumar et al. (2020) also analyze self-training in a setting where gradual domain shift occurs over multiple timesteps, but assume a small Wasserstein distance bound on the shift between consecutive timesteps. Another line of work leverages unlabeled data using non-parametric methods, requiring unlabeled sample complexity that is exponential in dimension (Rigollet, 2007; Singh et al., 2009; Urner & Ben-David, 2013).

This paper provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. Under a simple and realistic expansion assumption on the data distribution, we show that self-training with input consistency regularization using a deep network can achieve high accuracy on true labels, using an unlabeled sample size that is polynomial in the margin and Lipschitzness of the model. Our analysis provides theoretical intuition for recent empirically successful self-training algorithms which rely on input consistency regularization (Berthelot et al., 2019; Sohn et al., 2020; Xie et al., 2020). Our expansion assumption intuitively states that the data distribution has good continuity within each class.
Concretely, letting P_i be the distribution of data conditioned on class i, expansion states that for any small subset S of examples with class i,

P_i(neighborhood of S) ≥ c · P_i(S),    (1.1)

where c > 1 is the expansion factor. The neighborhood will be defined to incorporate data augmentation, but for now can be thought of as the collection of points within a small ℓ2 distance of S. This notion is an extension of the Cheeger constant (or isoperimetric or expansion constant) (Cheeger, 1969), which has been studied extensively in graph theory (Chung & Graham, 1997), combinatorial optimization (Mohar & Poljak, 1993; Raghavendra & Steurer, 2010), sampling (Kannan et al., 1995; Lovász & Vempala, 2007; Zhang et al., 2017), and even in early versions of self-training (Balcan et al., 2005) for the co-training setting (Blum & Mitchell, 1998). Expansion says that the manifold of each class has sufficient connectivity, as every subset S has a neighborhood larger than S. We give examples of distributions satisfying expansion in Section 3.1. We also require a separation condition stating that there are few neighboring pairs from different classes.

Our algorithms leverage expansion by using input consistency regularization (Miyato et al., 2018; Xie et al., 2019) to encourage the predictions of a classifier G to be consistent on neighboring examples:

R(G) = E_x [ max_{x' a neighbor of x} 1(G(x) ≠ G(x')) ]    (1.2)

For unsupervised domain adaptation and semi-supervised learning, we analyze an algorithm which fits G to pseudolabels on unlabeled data while regularizing input consistency. Assuming expansion and separation, we prove that the fitted model will denoise the pseudolabels and achieve high accuracy on the true labels (Theorem 4.3). This explains the empirical phenomenon that self-training on pseudolabels often improves over the pseudolabeler, despite having no access to true labels.
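As a toy sanity check on the expansion condition (1.1) (our own illustration, not an experiment from the paper), the following sketch models a class-conditional distribution P_i as a one-dimensional standard Gaussian, takes a low-probability tail set S, and estimates the ratio of the neighborhood's mass to the set's mass by Monte Carlo. The radius r is a hypothetical stand-in for the reach of data augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a class-conditional distribution P_i: a 1-D standard Gaussian.
samples = rng.normal(size=200_000)

r = 0.25  # neighborhood radius (hypothetical stand-in for augmentation reach)
# S: a low-probability tail set {x >= 2}; its r-neighborhood is {x >= 2 - r}.
p_S = (samples >= 2.0).mean()
p_neighborhood = (samples >= 2.0 - r).mean()
c_hat = p_neighborhood / p_S
print(c_hat)  # empirical expansion factor, comfortably above 1
```

For this Gaussian tail the neighborhood's mass is noticeably larger than the set's, matching the intuition that small sets on a well-connected class manifold expand; sets with larger probability mass expand by a smaller factor.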
For unsupervised learning, we consider finding a classifier G that minimizes the input consistency regularizer under the constraint that enough examples are assigned each label. In Theorem 3.6, we show that assuming expansion and separation, the learned classifier will have high accuracy in predicting true classes, up to a permutation of the labels (which cannot be recovered without true labels).

The main intuition behind the theorems is as follows: input consistency regularization ensures that the model is locally consistent, and the expansion property magnifies the local consistency to global consistency within the same class. In the unsupervised domain adaptation setting, as shown in Figure 1 (right), the incorrectly pseudolabeled examples (the red area) are gradually denoised by their correctly pseudolabeled neighbors (the green area), whose probability mass is non-trivial (at least c − 1 times the mass of the mistaken set, by expansion). We note that expansion is only required on the population distribution, but self-training is performed on the empirical samples. Due to the extrapolation power of parametric methods, the local-to-global consistency effect of expansion occurs implicitly on the population. In contrast, nearest-neighbor methods would require expansion to occur explicitly on the empirical samples, suffering the curse of dimensionality as a result. We provide more details below, and visualize this effect in Figure 1 (left). To the best of our knowledge, this paper gives the first analysis with polynomial sample complexity guarantees for deep neural net models for unsupervised learning, semi-supervised learning, and unsupervised domain adaptation.
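The local-to-global consistency intuition can be sketched in a discrete caricature (ours, not the paper's proof): if a classifier G agrees on every pair of neighboring examples and the class manifold is connected, then G is constant on the entire class. Here the manifold is modeled as a small connected graph, and G's value at a single example propagates along edges.

```python
from collections import deque

# A connected toy "class manifold": nodes are examples, edges are neighbor pairs.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

label = {0: 1}  # G's prediction at one example of the class
queue = deque([0])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in label:
            label[v] = label[u]  # local consistency forces G(v) == G(u)
            queue.append(v)

print(label)  # every example in the class ends up with the same prediction
```

Expansion strengthens this picture quantitatively: it rules out low-probability "bottlenecks" in the class manifold, so a mostly correct, locally consistent labeling cannot confine its mistakes to a stable small region.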

