WHEN SOURCE-FREE DOMAIN ADAPTATION MEETS LEARNING WITH NOISY LABELS

Abstract

Recent state-of-the-art source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, which has succeeded in adapting knowledge from a source domain to an unlabeled target domain without accessing the private source data. However, existing methods rely on pseudo-labels generated by source models, which can be noisy due to domain shift. In this paper, we study SFDA from the perspective of learning with label noise (LLN). Unlike the label noise in the conventional LLN scenario, we prove that the label noise in SFDA follows a different distribution assumption. We further prove that such a difference makes existing LLN methods, which rely on their distribution assumptions, unable to address the label noise in SFDA. Empirical evidence shows that only marginal improvements are achieved when applying existing LLN methods to SFDA. On the other hand, although there exists a fundamental difference between the label noise in the two scenarios, we demonstrate theoretically that the early-time training phenomenon (ETP), previously observed in conventional label noise settings, can also be observed in SFDA. Extensive experiments demonstrate significant improvements to existing SFDA algorithms achieved by leveraging ETP to address the label noise in SFDA.

1. INTRODUCTION

Deep learning demonstrates strong performance on various tasks across different fields. However, it is limited by the requirement of large-scale labeled, independent and identically distributed (i.i.d.) data. Unsupervised domain adaptation (UDA) is thus proposed to mitigate the distribution shift between the labeled source domain and the unlabeled target domain. In view of the importance of data privacy, it is crucial to be able to adapt a pre-trained source model to the unlabeled target domain without accessing the private source data, which is known as Source-Free Domain Adaptation (SFDA). The current state-of-the-art SFDA methods (Liang et al., 2020; Yang et al., 2021a;b) mainly focus on learning meaningful cluster structures in the feature space, and the quality of the learned cluster structures hinges on the reliability of pseudo labels generated by the source model. Among these methods, SHOT (Liang et al., 2020) purifies pseudo labels of target data based on nearest centroids, and the purified pseudo labels are then used to guide the self-training. G-SFDA (Yang et al., 2021b) and NRC (Yang et al., 2021a) further refine pseudo labels by encouraging similar predictions between a data point and its neighbors. For a single target data point, when most of its neighbors are correctly predicted, these methods can provide it with an accurate pseudo label. However, as we illustrate in Figure 1i(a-b), when the majority of its neighbors are incorrectly predicted to belong to a category, it will be assigned an incorrect pseudo label, misleading the learning of cluster structures. The experimental result on VisDA (Peng et al., 2017), shown in Figure 1ii, further verifies this phenomenon.
[Figure 1: (i) Illustration of label noise in SFDA. At t = 0, pseudo labels are produced by the source model; at t = t1 (a-b), existing SFDA algorithms using local cluster information cannot address the label noise because it is unbounded (Section 3). (c) We prove that ETP exists in SFDA, which can be leveraged to address the unbounded label noise (Section 4). (ii) Observed label noise phenomena on the VisDA dataset.]
By directly applying the pre-trained source model to each target domain instance (central instance), we collect its neighbors and evaluate their quality. We observe that, for each class, a large proportion of the neighbors are misleading (i.e., the neighbors' pseudo labels differ from the central instance's true label), some even with high confidence (e.g., over-confident misleading neighbors whose prediction score exceeds 0.75).
Based on this observation, we conclude that: (1) the pseudo labels leveraged in current SFDA methods can be heavily noisy; (2) pseudo-label purification methods used in SFDA, which rely heavily on the quality of the pseudo labels themselves, will be affected by such label noise, and the prediction error will accumulate as training progresses. More details can be found in Appendix A. In this paper, we address the aforementioned problem by formulating SFDA as learning with label noise (LLN). Unlike existing studies that heuristically rely on cluster structures or neighbors, we investigate the properties of label noise in SFDA and show that there is an intrinsic discrepancy between the SFDA and LLN problems. Specifically, in conventional LLN scenarios, the label noise is generated by human annotators or image search engines (Patrini et al., 2017; Xiao et al., 2015; Xia et al., 2020a), where the underlying distribution assumption is that the mislabeling rate for a sample is bounded. In SFDA scenarios, however, the label noise is generated by the source model due to the distribution shift, and we prove that the mislabeling rate for a sample is much higher and can approach 1. We term the former bounded label noise and the latter unbounded label noise. Moreover, we theoretically show that most existing LLN methods, which rely on the bounded label noise assumption, are unable to address the label noise in SFDA due to this fundamental difference (Section 3). To this end, we leverage the early-time training phenomenon (ETP) in LLN to address the unbounded label noise and to improve the efficiency of existing SFDA algorithms. Specifically, ETP indicates that classifiers can predict mislabeled samples with relatively high accuracy during the early learning phase, before they start to memorize the mislabeled data (Liu et al., 2020).
Although ETP has been previously observed, it has only been studied under bounded random label noise in conventional LLN scenarios. In this work, we theoretically and empirically show that ETP still exists in the unbounded label noise scenario of SFDA. Moreover, we empirically show that existing SFDA algorithms can be substantially improved by leveraging ETP, which opens up a new avenue for SFDA. As an instantiation, we incorporate a simple early learning regularization (ELR) term (Liu et al., 2020) into existing SFDA objective functions, achieving consistent improvements on four different SFDA benchmark datasets. As a comparison, we also apply other existing LLN methods to SFDA, including Generalized Cross Entropy (GCE) (Zhang & Sabuncu, 2018), Symmetric Cross Entropy Learning (SL) (Wang et al., 2019b), Generalized Jensen-Shannon Divergence (GJS) (Englesson & Azizpour, 2021) and Progressive Label Correction (PLC) (Zhang et al., 2021). Our empirical evidence shows that they are inappropriate for addressing the label noise in SFDA, which is also consistent with our theoretical results (Section 4). Our main contributions can be summarized as follows: (1) We establish the connection between SFDA and LLN. Compared with the conventional LLN problem, which assumes bounded label noise, the problem in SFDA can be viewed as LLN with unbounded label noise. (2) We theoretically and empirically justify that ETP exists in the unbounded label noise scenario. On the algorithmic side, we instantiate our analysis by simply adding a regularization term to the SFDA objective functions. (3) We conduct extensive experiments to show that ETP can be utilized to improve many existing SFDA algorithms by a large margin across multiple SFDA benchmarks.

2. RELATED WORK

Source-free domain adaptation. Recently, SFDA has been studied to preserve data privacy. The first branch of research leverages target pseudo labels to conduct self-training and implicitly achieve adaptation (Liang et al., 2021; Tanwisuth et al., 2021; Ahmed et al., 2021; Yang et al., 2021b). SHOT (Liang et al., 2020) introduces a k-means clustering and mutual information maximization strategy for self-training. NRC (Yang et al., 2021a) further investigates the neighbors of target clusters to improve the accuracy of pseudo labels. These studies more or less involve pseudo-label purification processes, but they are primarily heuristic algorithms and suffer from the label noise accumulation problem mentioned previously. The other branch utilizes generative models to synthesize target-style training data (Qiu et al., 2021; Liu et al., 2021b). Some methods also explore SFDA algorithms in various settings: USFDA (Kundu et al., 2020a) and FS (Kundu et al., 2020b) design methods for universal and open-set UDA. In this paper, we regard SFDA as an LLN problem. We aim to identify what category of noisy labels exists in SFDA and to ameliorate such label noise to improve the performance of current SFDA algorithms.
Learning with label noise. Existing methods for training neural networks with label noise focus on symmetric, asymmetric, and instance-dependent label noise. For example, one branch of research leverages noise-robust loss functions to cope with symmetric and asymmetric noise, including GCE (Zhang & Sabuncu, 2018), SL (Wang et al., 2019b), NCE (Ma et al., 2020), and GJS (Englesson & Azizpour, 2021), which have been proven effective against bounded label noise. On the other hand, CORES (Cheng et al., 2020) and CAL (Zhu et al., 2021) have been shown useful in mitigating instance-dependent label noise. These methods are only tailored to conventional LLN settings. Recently, Liu et al. (2020) studied the early-time training phenomenon (ETP) in conventional label noise scenarios and proposed the regularization term ELR to exploit the benefits of ETP. PLC (Zhang et al., 2021) is another conventional LLN algorithm utilizing ETP, but it cannot retain the benefits of ETP in SFDA because the memorization of noisy labels is much faster there. Our contributions are: (1) we theoretically and empirically study ETP in the SFDA scenario; (2) based on an in-depth analysis of many existing LLN methods (Zhang & Sabuncu, 2018; Wang et al., 2019b; Englesson & Azizpour, 2021; Zhang et al., 2021), we demonstrate that ELR is useful for many SFDA problems.

3. LABEL NOISE IN SFDA

The presence of label noise in training datasets has been shown to degrade model performance (Malach & Shalev-Shwartz, 2017; Han et al., 2018). In SFDA, existing algorithms rely on pseudo-labels produced by the source model, which are inevitably noisy due to the domain shift. SFDA methods such as Liang et al. (2020); Yang et al. (2021a;b) cannot handle the situation in which a target sample and its neighbors are all incorrectly predicted by the source model. In this section, we formulate SFDA as the problem of LLN to address this issue. We assume that the source domain D_S and the target domain D_T follow two different underlying distributions over X × Y, where X and Y are respectively the input and label spaces. In the SFDA setting, we aim to learn a target classifier f(x; θ) : X → Y given only a model f_S(x) pre-trained on D_S and a set of unlabeled target domain observations drawn from D_T. We regard incorrectly assigned pseudo-labels as noisy labels. Unlike the "bounded label noise" assumption in the conventional LLN domain, we will show that the label noise in SFDA is unbounded. We further prove that most existing LLN methods that rely on the bounded assumption cannot address the label noise in SFDA due to this difference. Label noise in conventional LLN settings: In conventional label noise settings, the injected noisy labels are collected by either human annotators or image search engines (Lee et al., 2018; Li et al., 2017; Xiao et al., 2015). The label noise is usually assumed to be either independent of instances (i.e., symmetric or asymmetric label noise) (Patrini et al., 2017; Liu & Tao, 2015; Xu et al., 2019b) or dependent on instances (i.e., instance-dependent label noise) (Berthon et al., 2021; Xia et al., 2020b).
The underlying assumption for both is that a sample x has the highest probability of being in its correct class y, i.e., Pr[Ỹ = i | Y = i, X = x] > Pr[Ỹ = j | Y = i, X = x], ∀x ∈ X, i ≠ j, where Ỹ is the noisy label and Y is the ground-truth label for input X. Equivalently, it assumes a bounded noise rate: given an image to annotate, the mislabeling rate for the image is bounded by a small number, which is realistic in conventional LLN settings (Xia et al., 2020b; Cheng et al., 2020). When the label noise is generated by the source model, the underlying assumption of these types of label noise does not hold. Label noise in SFDA: For the label noise generated by the source model, the mislabeling rate for an image can approach 1, that is, Pr[Ỹ = j | Y = i, X = x] → 1, ∃S ⊂ X, ∀x ∈ S, i ≠ j. To see that the label noise in SFDA is unbounded, we consider a two-component multivariate Gaussian mixture distribution with equal priors for both domains. Let the first component (y = 1) of the source domain distribution D_S be N(µ_1, σ²I_d), and the second component (y = -1) of D_S be N(µ_2, σ²I_d), where µ_1, µ_2 ∈ R^d and I_d ∈ R^{d×d}. For the target domain distribution D_T, let the first component (y = 1) of D_T be N(µ_1 + ∆, σ²I_d), and the second component (y = -1) of D_T be N(µ_2 + ∆, σ²I_d), where ∆ ∈ R^d is the shift between the two domains. Notice that the domain shift considered is a general shift that has been studied in Stojanov et al. (2021); Zhao et al. (2019), and we illustrate it in Figure 9 in the supplementary material. Let f_S be the optimal source classifier.
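The bounded assumption above is easy to state as a predicate on a sample's noise distribution. A minimal sketch (the function name and the two example rows are illustrative, not taken from any dataset): given the vector of probabilities Pr[Ỹ = j | Y = i, X = x] over observed labels j, the conventional LLN assumption requires the clean class to remain the most likely observed label.

```python
def bounded_noise_holds(trans_row, true_class):
    """Conventional LLN condition for one sample: the clean class i must be
    the most likely observed label, i.e.
    Pr[noisy = i | clean = i, x] > Pr[noisy = j | clean = i, x] for all j != i."""
    p_true = trans_row[true_class]
    return all(p_true > p for j, p in enumerate(trans_row) if j != true_class)

print(bounded_noise_holds([0.80, 0.10, 0.10], 0))  # annotator-style noise: True
print(bounded_noise_holds([0.05, 0.90, 0.05], 0))  # SFDA-style shifted noise: False
```

The second row models a sample whose mislabeling rate approaches 1 under domain shift; it violates the predicate, which is the regime the rest of this section analyzes.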
First, we build the relationship between the mislabeling rate for target data and the domain shift:

Pr_{(x,y)~D_T}[f_S(x) ≠ y] = (1/2)Φ(-d_1/σ) + (1/2)Φ(-d_2/σ),    (1)

where d_1 = ||(µ_2 - µ_1)/2 - c|| · sign(||(µ_2 - µ_1)/2|| - ||c||), d_2 = ||(µ_2 - µ_1)/2 + c||, c = α(µ_2 - µ_1), α = ∆⊤(µ_2 - µ_1)/||µ_2 - µ_1||² is the magnitude of the domain shift, and Φ is the standard normal cumulative distribution function. Eq. (1) shows that the magnitude of the domain shift inherently controls the mislabeling error for target data: the mislabeling rate increases as the magnitude of the domain shift increases. We defer the proof and details to Appendix B. More importantly, we characterize that the label noise is unbounded among these mislabeled samples.

Theorem 3.1. Without loss of generality, assume that ∆ is positively correlated with the vector µ_2 - µ_1, i.e., ∆⊤(µ_2 - µ_1) > 0. For (x, y) ~ D_T, if x ∈ R, then Pr[f_S(x) ≠ y] ≥ 1 - δ, where δ ∈ (0, 1) (e.g., δ = 0.01), R = R_1 ∩ R_2, R_1 = {x : ||x - µ_1 - ∆|| ≤ σ(√d/2 - log((1-δ)/δ)/√d)}, and R_2 = {x : x⊤1_d > (σd + 2µ_1⊤1_d)/2}. Meanwhile, R is non-empty when α > log((1-δ)/δ)/d, where α = ∆⊤(µ_2 - µ_1)/||µ_2 - µ_1||² > 0 is the magnitude of the domain shift along the direction µ_2 - µ_1.

Conventional LLN methods assume that the label noise is bounded: Pr[f_H(x) ≠ y] < m, ∀(x, y) ~ D_T, where f_H is the labeling function, and m = 0.5 if each component has the same number of clean samples (Cheng et al., 2020). However, Theorem 3.1 indicates that the label noise generated by the source model is unbounded for any x ∈ R. In practice, region R is non-empty because neural networks are usually trained on high-dimensional data with d ≫ 1, so the condition α > log((1-δ)/δ)/d → 0 is easy to satisfy. The probability measure of R = R_1 ∩ R_2 (i.e., Pr_{(x,y)~D_T}[x ∈ R]) increases as the magnitude of the domain shift α increases, meaning that more data points contradict the conventional LLN assumption. More details can be found in Appendix C.
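The qualitative message of Eq. (1) can be checked by Monte Carlo in a simplified one-dimensional version of the setup (two Gaussian components with means µ_1 = -1 and µ_2 = 1, the shift applied to both; every parameter value below is an illustrative assumption of the sketch, not a value from the paper):

```python
import random

def target_error(shift, mu1=-1.0, mu2=1.0, sigma=0.5, n=20000, seed=0):
    """Monte Carlo estimate of Pr[f_S(x) != y] on the shifted target domain.
    The optimal source classifier thresholds at the midpoint (mu1 + mu2) / 2."""
    rng = random.Random(seed)
    thresh = (mu1 + mu2) / 2.0
    errors = 0
    for _ in range(n):
        y = 1 if rng.random() < 0.5 else -1          # balanced priors
        mu = mu2 if y == 1 else mu1
        x = rng.gauss(mu + shift, sigma)             # target sample: source mean + shift
        pred = 1 if x > thresh else -1
        errors += (pred != y)
    return errors / n

for s in (0.0, 0.5, 1.0, 2.0):
    print(s, round(target_error(s), 3))   # error grows with the shift magnitude
```

Consistent with Eq. (1), the estimated error grows with the shift; for a large shift nearly every sample of the component pushed across the source decision boundary is mislabeled, i.e., the noise is unbounded on that region.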
Given that unbounded label noise exists in SFDA, the following lemma establishes that many existing LLN methods (Wang et al., 2019b; Ghosh et al., 2017; Englesson & Azizpour, 2021; Ma et al., 2020), which rely on the bounded assumption, are not noise tolerant in SFDA.

Lemma 3.2. Let the risk of a function h : X → Y under the clean data be R(h) = E_{x,y}[ℓ_LLN(h(x), y)], and the risk of h under the noisy data be R̃(h) = E_{x,ỹ}[ℓ_LLN(h(x), ỹ)], where the noisy data follow the unbounded assumption, i.e., Pr[ỹ ≠ y | x ∈ R] = 1 - δ for a subset R ⊂ X and δ ∈ (0, 1). Then the global minimizer h̃* of R̃(h) disagrees with the global minimizer h* of R(h) on data points x ∈ R with probability at least 1 - δ.

Here ℓ_LLN denotes the noise-robust losses of the existing LLN methods in Wang et al. (2019b); Ghosh et al. (2017); Englesson & Azizpour (2021); Ma et al. (2020). When the noisy data follow the bounded assumption, these methods are noise tolerant, as the minimizer h̃* converges to the minimizer h* with high probability. We defer the details and proofs for the related LLN methods to Appendix D.

4. LEARNING WITH LABEL NOISE IN SFDA

Given the fundamental difference between the label noise in SFDA and the label noise in conventional LLN scenarios, existing LLN methods, whose underlying assumption is bounded label noise, cannot be applied to solve the label noise in SFDA. This section focuses on investigating how to address the unbounded label noise in SFDA. Motivated by the recent studies Liu et al. (2020); Arpit et al. (2017), which observed an early-time training phenomenon (ETP) on noisy datasets with bounded random label noise, we find that ETP does not rely on the bounded random label noise assumption and generalizes to the unbounded label noise in SFDA. ETP describes the training dynamics of a classifier that preferentially fits the clean samples and therefore has higher prediction accuracy on mislabeled samples during the early training stage. Such training characteristics can be very beneficial for SFDA problems, in which we only have access to the source model and the highly noisy target data. To theoretically prove ETP in the presence of unbounded label noise, we first describe the problem setup. We again consider a two-component Gaussian mixture distribution with equal priors. Let y denote the true label for x, sampled uniformly from {-1, +1}. The instance x is sampled from the distribution N(yµ, σI_d), where ||µ|| = 1. Let ỹ denote the noisy label for x. As revealed by Theorem 3.1, the label noise generated by the source model concentrates close to the decision boundary. To assign the noisy labels, we therefore let ỹ = yβ(x, y), where β(x, y) = sign(1{y x⊤µ > r} - 0.5) is the label flipping function and r controls the mislabeling rate. If β(x, y) = -1, then the data point x is mislabeled. Meanwhile, the label noise generated by the flipping function β(x, y) is unbounded: Pr[ỹ ≠ y | y x⊤µ ≤ r] = 1, i.e., all labels are flipped on R = {x : y x⊤µ ≤ r}. We study the early-time training dynamics of gradient descent on a linear classifier.
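The label flipping rule ỹ = yβ(x, y) can be written out directly. A minimal sketch (the helper name and the concrete numbers are illustrative): the label is kept when the margin y⟨x, µ⟩ exceeds r and flipped otherwise, so every sample in the band near the boundary is mislabeled with probability 1.

```python
def flip_label(x, y, mu, r):
    """Unbounded label noise from the setup above: flip the label whenever the
    sample's margin y * <x, mu> falls at or below the threshold r, so that
    Pr[noisy != clean | margin <= r] = 1."""
    margin = y * sum(a * b for a, b in zip(x, mu))
    beta = 1 if margin > r else -1     # sign(1{margin > r} - 0.5)
    return y * beta

mu = (1.0, 0.0)
print(flip_label((2.0, 0.3), +1, mu, r=0.5))   # large margin: label kept -> 1
print(flip_label((0.2, 0.3), +1, mu, r=0.5))   # small margin: flipped -> -1
```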
The parameter θ is learned over the unbounded label noise data {x_i, ỹ_i}_{i=1}^n with the logistic loss

L(θ) = (1/n) Σ_{i=1}^n log(1 + exp(-ỹ_i θ⊤x_i)),

optimized by gradient descent θ_{t+1} = θ_t - η∇_θ L(θ_t), where η is the learning rate. The following theorem characterizes the prediction accuracy on mislabeled samples at an early training time T.

Theorem 4.1. Let B = {x : ỹ ≠ y} be the set of mislabeled samples, and let κ(B; θ) be the prediction accuracy on B, calculated between the ground-truth labels and the labels predicted by the classifier with parameter θ. If at most half of the samples are mislabeled (r < 1), then there exist a proper time T and a constant c_0 > 0 such that for any 0 < σ < c_0 and n → ∞, with probability 1 - o_p(1):

κ(B; θ_T) ≥ 1 - exp{-(1/200) g(σ)²},

where g(σ) = Erf[(1-r)/(√2 σ)] / (2(1+2σ)σ) + exp(-(r-1)²/(2σ²)) / (√(2π)(1+2σ)) > 0 is a monotone decreasing function with g(σ) → ∞ as σ → 0, and Erf[x] = (2/√π) ∫_0^x e^{-t²} dt.

The proof is provided in Appendix E. Compared to the ETP found in Liu et al. (2020), where the label noise is assumed to be bounded, Theorem 4.1 shows that ETP exists even when the label noise is unbounded. At a proper time T, the classifier trained by gradient descent provides accurate predictions for mislabeled samples, with accuracy lower bounded by a function of the cluster variance σ. When σ → 0, the predictions for all mislabeled samples equal their ground-truth labels (i.e., κ(B; θ_T) → 1). When the classifier is trained for a sufficiently long time, it gradually memorizes the mislabeled data, and its predictions on mislabeled samples match their incorrect labels instead of their ground-truth labels (Liu et al., 2020; Maennel et al., 2020). Based on these insights, the memorization of mislabeled data can be alleviated by leveraging the predicted labels during the early training time.
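Theorem 4.1 can be checked numerically: a few gradient-descent steps on the logistic loss over data corrupted by the margin-based flip of the setup above already yield high ground-truth accuracy on the mislabeled set B. The simulation below is a sketch under illustrative assumptions (the values of n, σ, r, η, the number of steps, and the seed are all our choices, not the theorem's constants):

```python
import math
import random

def simulate_early_training(n=2000, sigma=0.5, r=0.8, eta=0.5, steps=5, seed=1):
    """Train a linear classifier by full-batch gradient descent on the logistic
    loss over labels flipped by the unbounded-noise rule (margin <= r), then
    report the ground-truth accuracy kappa on the mislabeled set B at the
    early time T = `steps`."""
    rng = random.Random(seed)
    mu = (1.0, 0.0)                                   # unit-norm cluster mean
    data = []
    for _ in range(n):
        y = 1 if rng.random() < 0.5 else -1
        x = (y * mu[0] + rng.gauss(0, sigma), y * mu[1] + rng.gauss(0, sigma))
        margin = y * (x[0] * mu[0] + x[1] * mu[1])
        noisy = y if margin > r else -y               # unbounded flip near the boundary
        data.append((x, y, noisy))
    theta = [0.0, 0.0]
    for _ in range(steps):                            # a few early GD steps
        g = [0.0, 0.0]
        for x, _, ny in data:
            z = ny * (theta[0] * x[0] + theta[1] * x[1])
            s = 1.0 / (1.0 + math.exp(min(50.0, z)))  # sigmoid(-z), overflow-guarded
            g[0] -= ny * x[0] * s / n
            g[1] -= ny * x[1] * s / n
        theta = [theta[0] - eta * g[0], theta[1] - eta * g[1]]
    mislabeled = [(x, y) for x, y, ny in data if ny != y]
    correct = sum(1 for x, y in mislabeled
                  if (1 if theta[0] * x[0] + theta[1] * x[1] > 0 else -1) == y)
    return correct / len(mislabeled)

print(round(simulate_early_training(), 3))  # high ground-truth accuracy on B
```

Roughly a third of the samples are mislabeled here, yet at the early time T the classifier still recovers the ground-truth labels of most of them, which is the behavior the theorem lower-bounds.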
To leverage the predictions during the early training time, we adopt a recently established method, early learning regularization (ELR) (Liu et al., 2020), which encourages model predictions to stick to the early-time predictions for x. Since ETP exists in the unbounded label noise scenario, ELR can be applied to address the label noise in SFDA. The regularization is given by

L_ELR(θ_t) = log(1 - ȳ_t⊤ f(x; θ_t)),    (4)

where we overload f(x; θ_t) to be the probabilistic output for the sample x, and ȳ_t = β ȳ_{t-1} + (1 - β) f(x; θ_t) is the moving-average prediction for x, with β a hyperparameter. To see how ELR prevents the model from memorizing the label noise, we compute the gradient of Eq. (4) with respect to f(x; θ_t):

dL_ELR(θ_t) / df(x; θ_t) = - ȳ_t / (1 - ȳ_t⊤ f(x; θ_t)).

Note that minimizing Eq. (4) forces f(x; θ_t) to stay close to ȳ_t. When ȳ_t aligns better with f(x; θ_t), the magnitude of the gradient becomes larger, so the gradient aligning f(x; θ_t) with ȳ_t overwhelms the gradients of other loss terms that align f(x; θ_t) with the noisy labels. As training progresses, the moving-average predictions ȳ_t for target samples gradually approach their ground-truth labels up to the time T. Therefore, Eq. (4) prevents the model from memorizing the label noise by forcing the model predictions to stay close to these moving-average predictions ȳ_t, which are very likely to be the ground-truth labels. Some existing LLN methods propose to assign pseudo labels to data or require two-stage training (Cheng et al., 2020; Zhu et al., 2021; Zhang et al., 2021). Unlike these LLN methods, Eq. (4) can be easily embedded into any existing SFDA algorithm without conflict. The overall objective function is given by

L = L_SFDA + λ L_ELR,

where L_SFDA is any SFDA objective function and λ is a hyperparameter. Empirical Observations on Real-World Datasets.
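The regularizer and its moving-average target can be implemented in a few lines. The sketch below is a framework-free rendition of the ELR idea (Liu et al., 2020) with a per-sample target table; the class and method names are our own, and in practice the returned term would be added to the host SFDA loss as L = L_SFDA + λ L_ELR:

```python
import math

class ELR:
    """Early learning regularization: keep a per-sample exponential moving
    average y_bar of the model's probabilistic outputs and return
    L_ELR = log(1 - <y_bar, p>), which pulls predictions toward the
    early-time ones."""
    def __init__(self, n_samples, n_classes, beta=0.7):
        self.beta = beta
        # targets start uniform; indexed by a stable per-sample id
        self.targets = [[1.0 / n_classes] * n_classes for _ in range(n_samples)]

    def __call__(self, idx, probs):
        t = self.targets[idx]
        # moving-average update: y_bar_t = beta * y_bar_{t-1} + (1 - beta) * p
        t = [self.beta * a + (1 - self.beta) * p for a, p in zip(t, probs)]
        norm = sum(t)
        t = [a / norm for a in t]  # guard against floating-point drift
        self.targets[idx] = t
        inner = sum(a * p for a, p in zip(t, probs))
        return math.log(1.0 - inner)   # smaller when probs aligns with the target

elr = ELR(n_samples=1, n_classes=3, beta=0.7)
p = [0.7, 0.2, 0.1]
loss1 = elr(0, p)
loss2 = elr(0, p)      # same prediction again: target aligns more, loss drops
print(loss2 < loss1)   # True
```

Because repeating the same prediction drags the target toward it, the term rewards consistency with early predictions; a prediction that later drifts toward a noisy label fights an increasingly large gradient, which is the mechanism described above.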
We empirically verify that target classifiers have higher prediction accuracy on target data during the early training and adaptation stage, and we propose leveraging this benefit to prevent the classifier from memorizing the noisy labels. The observations are shown in Figure 2. The parameters of the classifiers are initialized from source models, and labels of the target data are annotated by the initialized classifiers. We train the target classifiers on target data with the standard cross-entropy (CE) loss and the generalized cross-entropy (GCE) loss, a well-known noise-robust loss widely used in bounded LLN scenarios. The solid green, orange and blue lines represent the training accuracy when optimizing the classifiers with the CE loss, GCE loss, and ELR loss, respectively. The dotted red lines represent the labeling accuracy of the initialized classifiers. Considering that the classifiers memorize the unbounded label noise very quickly, we evaluate the prediction accuracy on target data at every batch for the first 90 steps; after 90 steps, we evaluate it every 0.33 epoch. The green lines show that ETP exists in SFDA, which is consistent with our theoretical result. Meanwhile, in all scenarios, the green and orange lines show that classifiers provide higher prediction accuracy during the first few iterations. After a few iterations, they start to memorize the label noise even with a noise-robust loss (e.g., GCE), and eventually the classifiers are expected to memorize the whole dataset. In conventional LLN settings, it has been empirically verified that it takes a much longer time before classifiers start memorizing the label noise (Liu et al., 2020; Xia et al., 2020a). We provide further analysis in Appendix H.

5. EXPERIMENTS

We aim to improve the efficiency of existing SFDA algorithms by using ELR to leverage ETP. We evaluate the performance on four different SFDA benchmark datasets: Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), VisDA (Peng et al., 2017) and DomainNet (Peng et al., 2019). Due to the limited space, the results on the Office-31 dataset and additional experimental details are provided in Appendix G. Evaluation. We incorporate ELR into three existing baseline methods: SHOT (Liang et al., 2020), G-SFDA (Yang et al., 2021b), and NRC (Yang et al., 2021a). SHOT uses a k-means clustering and mutual information maximization strategy to train the representation network while freezing the final linear layer. G-SFDA aims to cluster target data with similar neighbors and attempts to maintain the source domain performance. NRC also explores the neighbors of target data with graph-based methods. ELR can be easily embedded into these methods by simply adding the regularization term to the loss function being optimized, without affecting the existing SFDA frameworks. We average the results over three random runs. Results. Tables 1-4 show the results before/after leveraging the early-time training phenomenon, where Table 4 is shown in Appendix G. In these tables, the top part shows the results of conventional UDA methods, and the bottom part shows the results of SFDA methods. We use SF to indicate whether a method is source-free or not, and Source Only + ELR to indicate ELR with self-training. The results show that ELR by itself can already boost performance. As existing SFDA methods are unable to address unbounded label noise, incorporating ELR into them further boosts performance. All 31 pairs of tasks (e.g., A → D) across the four datasets show better performance after solving the unbounded label noise problem using the early-time training phenomenon.
Meanwhile, solving the unbounded label noise problem in existing SFDA methods achieves state-of-the-art results on all benchmark datasets. These SFDA methods also outperform most methods that need to access the source data. Analysis of hyperparameters β and λ. The hyperparameter β is chosen from {0.5, 0.6, 0.7, 0.8, 0.9, 0.99}, and λ is chosen from {1, 3, 7, 12, 25}. We conduct a sensitivity study of the ELR hyperparameters on the DomainNet dataset, shown in Figure 3(a-b). In each figure, the study is conducted by fixing the other hyperparameter to its optimal value. The performance is robust to the hyperparameter β except for β = 0.99. When β = 0.99, classifiers are sensitive to changes in the learning curves, so the performance degrades since the learning curves change quickly in the unbounded label noise scenarios. Meanwhile, the performance is also robust to the hyperparameter λ except when λ becomes too large. The hyperparameter λ balances the effects of the existing SFDA algorithm and the effects of ELR. As indicated in Tables 1-4, using ELR alone to address the SFDA problem is not comparable to these SFDA methods. Hence, a large value of λ makes neural networks neglect the effects of these SFDA methods, leading to degraded performance. As we formulate SFDA as the problem of LLN, it is of interest to discuss some existing LLN methods, focusing on those that can be easily embedded into current SFDA algorithms. Based on this principle, we choose GCE (Zhang & Sabuncu, 2018), SL (Wang et al., 2019b) and GJS (Englesson & Azizpour, 2021), which have been theoretically proven robust to symmetric and asymmetric label noise, both of which are bounded. We highlight that the more recent method GJS outperforms ELR on real-world noisy datasets. However, we will show that GJS is inferior to ELR in SFDA scenarios, because the underlying assumption of GJS does not hold in SFDA.
Besides ELR, PLC is another method that leverages ETP, but we will show that it is also inappropriate for SFDA. To show the effects of the existing LLN methods under the unbounded label noise, we test these LLN methods on various SFDA datasets with target data whose labels are generated by source models. As shown in Figure 4, GCE, SL, GJS, and PLC are better than CE but still not comparable to ELR. Our analysis indicates that ELR follows the principle of ETP, which is theoretically justified in SFDA scenarios by our Theorem 4.1. The methods GCE, SL, and GJS follow the bounded label noise assumption, which does not hold in SFDA. Hence, they perform worse than ELR in SFDA, even though GJS outperforms ELR in conventional LLN scenarios. PLC (Zhang et al., 2021) utilizes ETP to purify noisy labels of target data, but it performs significantly worse than ELR. As the memorization of the unbounded label noise is very fast and classifiers memorize noisy labels within a few iterations (shown in Figure 2), purifying noisy labels every epoch is inappropriate for SFDA.
However, we notice that PCL performs relatively better on DomainNet than on the other datasets. The reason is that the memorization speed on the DomainNet dataset is slower than on the other datasets, as shown in Figure 2. In conventional LLN scenarios, PCL does not suffer from this issue, since memorization there is much slower than in SFDA scenarios. In Figure 3(c), we also evaluate the performance of incorporating the existing LLN methods into the SFDA algorithms SHOT and NRC. Since both PCL and SHOT assign pseudo labels to target data, PCL is incompatible with such SFDA methods and cannot be easily embedded into them. Hence, we only embed GCE, SL, GJS, and ELR into the SFDA algorithms. The figure illustrates that ELR still performs better than the other LLN methods when incorporated into SHOT and NRC. We also notice that GCE, SL, and GJS provide marginal improvements over the vanilla SHOT and NRC methods. We conjecture that the label noise in SFDA datasets is a hybrid of bounded and unbounded label noise due to the non-linearity of neural networks: GCE, SL, and GJS can address the bounded label noise, while ELR can address both bounded and unbounded label noise. Therefore, these experiments demonstrate that using ELR to leverage ETP can successfully address the unbounded label noise in SFDA.

6. CONCLUSION

In this paper, we study SFDA from the new perspective of LLN by theoretically showing that SFDA can be viewed as an LLN problem with unbounded label noise. Under this assumption, we rigorously justify that robust loss functions are not able to address the memorization of unbounded label noise. Based on the same assumption, we further analyze, both theoretically and empirically, the learning behavior of models during the early-time training stage and find that ETP can benefit SFDA problems. Through extensive experiments across multiple datasets, we show that ETP can be exploited by ELR to improve prediction performance, and that it can also be used to enhance existing SFDA algorithms.

A NEIGHBORS LABEL NOISE OBSERVATIONS ON REAL-WORLD DATASETS

This section provides more observations and explanations of neighbors' label noise during the source-free domain adaptation process on real-world datasets.

Figure 7: Neighbors label noise analysis on Office-Home.

Currently, most SFDA methods inevitably leverage pseudo labels for self-supervised learning or to learn the cluster structure of the target data in the feature space, in order to realize the domain adaptation goal. However, the pseudo labels generated by the source model are usually noisy and of poor quality due to the domain distribution shift. Some neighborhood-based heuristic methods (Yang et al., 2021a; b) have been proposed to purify these target-domain pseudo labels; they use the pseudo labels of neighbors in the feature space to correct and reassign the central data point's pseudo label. In fact, such methods rely on a strong assumption: that the neighbors' pseudo labels are of relatively high quality. However, in our experimental observations, we find that at the very beginning of the adaptation process, the similarity of two data points in the feature space cannot fully represent the connection between their labels. As a result, such methods easily provide useless and noisy prediction information for the central data point. We show statistical results on two real-world datasets, VisDA and Office-Home. Following the neighborhood construction method in Yang et al. (2021a; b), we use the pre-trained source model to infer the target data, extract the feature-space outputs, and obtain the prediction results. We use cosine similarity in the feature space to find the top k most similar neighbors (e.g., k = 2) for each data point (named the central data point). Then, we group the neighbors by the ground-truth label of their central data points and study the neighbors' quality for each class.
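The neighborhood construction described above can be sketched in a few lines; this is our own minimal NumPy version (the function name `topk_neighbors` is ours), not the code of Yang et al. (2021a; b):

```python
import numpy as np

def topk_neighbors(features, k=2):
    """Return the indices of the k most cosine-similar points for each row.

    features: (n, d) array of feature-space outputs from the source model.
    """
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T          # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)   # exclude each point itself
    return np.argsort(-sim, axis=1)[:, :k]
```

Each row of the returned array lists the neighbor indices of the corresponding central data point, which are then grouped by the central point's ground-truth label.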
Neighbors who do not belong to the correct category

We define neighbors that do not belong to the same category as their central data point as False Neighbors, meaning their ground-truth labels differ: Y_neighbor ≠ Y_central. The results on the VisDA (train → validation) and Office-Home (Pr → Cl) datasets are shown in Figure 6 and Figure 5.

Neighbors who can not provide useful prediction information

We further study the prediction information provided by such neighbors. Regardless of their true category, we consider neighbors whose predicted label is the same as the ground-truth label of the central data point to be Useful Neighbors; otherwise, they are Misleading Neighbors, as they cannot provide the expected useful prediction information. We denote by the Misleading Neighbors Ratio the proportion of misleading neighbors among all neighbors for each class. Besides, as some methods heuristically utilize the predicted logits as the predicted probability or confidence score in the pseudo-label purification process, we further study the Over-Confident Misleading Neighbors Ratio for each class, defined as the number of over-confident misleading neighbors (misleading neighbors with a high predicted logit, larger than 0.75) divided by the number of all neighbors per class. The results on VisDA and Office-Home are shown in Figure 1ii and Figure 7. We want to clarify that the above exploratory experimental results can only reflect the phenomenon of unbounded noise in SFDA to some extent: the fact that the set of over-confident misleading neighbors is non-empty corresponds, to some extent, to the non-emptiness of R proved in Theorem 3.1, but the definition of misleading neighbors does not rigorously satisfy the definition of unbounded label noise.
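Given the neighbors' predictions and confidences, the two ratios can be computed as in the following sketch (ours; the array names are illustrative):

```python
import numpy as np

def neighbor_ratios(neighbor_pred, neighbor_conf, central_gt, thresh=0.75):
    """Misleading and over-confident misleading neighbor ratios.

    neighbor_pred: (n, k) predicted labels of each point's k neighbors.
    neighbor_conf: (n, k) max predicted logits (confidence) of the neighbors.
    central_gt:    (n,)   ground-truth label of each central point.
    """
    misleading = neighbor_pred != central_gt[:, None]
    over_conf = misleading & (neighbor_conf > thresh)
    return misleading.mean(), over_conf.mean()
```

Restricting the input arrays to the samples of one class yields the per-class ratios plotted in the figures above.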

B RELATIONSHIP BETWEEN MISLABELING ERROR AND DOMAIN SHIFT

In this part, we focus on explaining the relationship between the label noise and the domain shift, as illustrated in Figure 9. The following theorem characterizes the relationship between the labeling error and the domain shift.

Theorem B.1. Without loss of generality, we assume that ∆ is positively correlated with the vector µ2 − µ1, i.e., ∆⊤(µ2 − µ1) > 0. Let f_S be the Bayes optimal classifier for the source domain S. Then

Pr_{(x,y)∼D_T}[f_S(x) ≠ y] = (1/2)Φ(−d1/σ) + (1/2)Φ(−d2/σ),

where d1 = ∥(µ2 − µ1)/2 − c∥ · sign(∥(µ2 − µ1)/2∥ − ∥c∥), d2 = ∥(µ2 − µ1)/2 + c∥, c = (µ2 − µ1)·∆⊤(µ2 − µ1)/∥µ2 − µ1∥², and Φ is the standard normal cumulative distribution function.

Theorem B.1 indicates that the labeling error on the target domain can be represented as a function of the domain shift ∆, which is shown numerically in Figure 8. The projection of the domain shift ∆ onto the vector µ2 − µ1 is given by c. Since c lies in the direction of µ2 − µ1, it can also be written as c = α(µ2 − µ1), where α ∈ R characterizes the magnitude of the domain shift. More specifically, Figure 8 presents the relationship between the mislabeling rate and α for all possible ∆. When ∆ is positively correlated with µ2 − µ1 (the assumption in Theorem B.1), we have α > 0, and when ∆ is negatively correlated with µ2 − µ1, we have α < 0. In both situations, the labeling error increases as the absolute value of α increases, which implies that the more severe the domain shift, the greater the mislabeling error. Besides, when the source and target domains are the same, the mislabeling error in Eq. (6) is minimized and reduces to the Bayes error, which cannot be reduced further (Fukunaga, 2013). This corresponds to the situation where ∆ is perpendicular to µ2 − µ1, c = 0, and α = 0 in Figure 8.
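The closed form in Theorem B.1 is easy to evaluate numerically. The sketch below is our own (it assumes c = α(µ2 − µ1) with α ≥ 0, as in the theorem's positively correlated case); it recovers the Bayes error at α = 0 and shows the error growing with α:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def target_error(mu1, mu2, sigma, alpha):
    """Mislabeling rate of the Bayes-optimal source classifier on the
    target domain, per Theorem B.1, with c = alpha * (mu2 - mu1)."""
    gap = np.linalg.norm(mu2 - mu1)
    half, c = gap / 2.0, alpha * gap          # ||(mu2 - mu1)/2|| and ||c||
    d1 = abs(half - c) * np.sign(half - c)    # signed distance, as in d1
    d2 = half + c
    return 0.5 * Phi(-d1 / sigma) + 0.5 * Phi(-d2 / sigma)
```

For µ2 − µ1 = (2, 0) and σ = 1, α = 0 gives the Bayes error Φ(−1) ≈ 0.159, and the error grows monotonically with α.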
Since the distributions of the two components with equal priors for the source domain are given by N(µ1, σ²I_d) and N(µ2, σ²I_d), respectively, by Bayes' rule, Eq. (7) is equivalent to

log( Pr[X = x|y = 1] / Pr[X = x|y = −1] ) > 0. (8)

Solving the left-hand side of Eq. (8) using the densities of the two multivariate Gaussian distributions, we get

h_S(x) := log( Pr[X = x|y = 1] / Pr[X = x|y = −1] ) = x⊤(µ1 − µ2)/σ² − (∥µ1∥² − ∥µ2∥²)/(2σ²). (9)

So f_S predicts x to the first component when h_S(x) > 0 and to the second component when h_S(x) ≤ 0. The decision boundary is the set of z such that h_S(z) = 0. When there is no domain shift (∆ = 0), we have D_S = D_T, and the mislabeling rate is the Bayes error:

Pr_{(x,y)∼D_S}[f_S(x) ≠ y] = (1/2) Pr_{x∼N(µ1,σ²I_d)}[h_S(x) < 0 | y = 1] + (1/2) Pr_{x∼N(µ2,σ²I_d)}[h_S(x) > 0 | y = −1]. (10)

We first study the first term in Eq. (10):

Pr_{x∼N(µ1,σ²I_d)}[h_S(x) < 0 | y = 1]
= ∫_{{x : x⊤(µ1−µ2) < (∥µ1∥² − ∥µ2∥²)/2}} (2πσ²)^{−d/2} exp(−∥x − µ1∥²/(2σ²)) dx_1 dx_2 ⋯ dx_d
= ∫_{{x : −∞ < x_1, …, x_{d−1} < ∞, d_0 < x_d}} (2πσ²)^{−d/2} exp(−Σ_{i=1}^d x_i²/(2σ²)) dx_1 dx_2 ⋯ dx_d
= ∫_{d_0}^{∞} (2πσ²)^{−1/2} exp(−x_d²/(2σ²)) dx_d
= Φ(−d_0/σ),

where the second equality holds by the rotational symmetry of isotropic Gaussian random vectors, Φ is the cumulative distribution function of the standard Gaussian distribution, and d_0 = ∥(µ2 − µ1)/2∥. Applying similar steps to the second term in Eq. (10) and substituting both into Eq. (10):

Pr_{(x,y)∼D_S}[f_S(x) ≠ y] = Φ(−∥µ2 − µ1∥/(2σ)). (11)

When there is no domain shift, the labeling error is the Bayes error, as expressed by Eq. (11). We then consider the case ∆ ≠ 0, where the distributions of the first and second components are N(µ1 + ∆, σ²I_d) and N(µ2 + ∆, σ²I_d), respectively.
Notice that the decision boundary z is an affine hyperplane. Any shift parallel to this hyperplane does not affect the final component predictions. The domain shift ∆ can be decomposed into the sum of two vectors: one parallel to the hyperplane and one perpendicular to it. It is straightforward to verify that µ2 − µ1 is perpendicular to the hyperplane. Thus, we project the domain shift ∆ onto the vector µ2 − µ1 to obtain the component of ∆ that is perpendicular to the hyperplane:

c = (µ2 − µ1)·∆⊤(µ2 − µ1)/∥µ2 − µ1∥². (12)

Since we assume ∆ is positively correlated with the vector µ2 − µ1, α = ∆⊤(µ2 − µ1)/∥µ2 − µ1∥² can be regarded as the magnitude of the domain shift along the direction µ2 − µ1. Note that the results also hold when ∆ is negatively correlated with µ2 − µ1; the proof follows very similar steps to the positively correlated case.
The mislabeling rate of the optimal source classifier f_S on target data is:

Pr_{(x,y)∼D_T}[f_S(x) ≠ y] = (1/2) Pr_{x∼N(µ1+∆,σ²I_d)}[h_S(x) < 0 | y = 1] + (1/2) Pr_{x∼N(µ2+∆,σ²I_d)}[h_S(x) > 0 | y = −1]. (13)

We first calculate the first term of Eq. (13). Following the same steps as above:

Pr_{x∼N(µ1+∆,σ²I_d)}[h_S(x) < 0 | y = 1]
= Pr_{x∼N(µ1+c,σ²I_d)}[h_S(x) < 0 | y = 1]
= ∫_{{x : x⊤(µ1−µ2) < (∥µ1∥² − ∥µ2∥²)/2}} (2πσ²)^{−d/2} exp(−∥x − µ1 − ∆∥²/(2σ²)) dx_1 dx_2 ⋯ dx_d
= ∫_{{x : −∞ < x_1, …, x_{d−1} < ∞, d_1 < x_d}} (2πσ²)^{−d/2} exp(−Σ_{i=1}^d x_i²/(2σ²)) dx_1 dx_2 ⋯ dx_d
= ∫_{d_1}^{∞} (2πσ²)^{−1/2} exp(−x_d²/(2σ²)) dx_d
= Φ(−d_1/σ), (14)

where d_1 = ∥(µ2 − µ1)/2 − c∥ · sign(∥(µ2 − µ1)/2∥ − ∥c∥). Similarly, the second term is given by:

Pr_{x∼N(µ2+∆,σ²I_d)}[h_S(x) > 0 | y = −1] = Φ(−d_2/σ), (15)

where d_2 = ∥(µ2 − µ1)/2 + c∥. Taking Eq. (14) and Eq. (15) into Eq. (13), we have

Pr_{(x,y)∼D_T}[f_S(x) ≠ y] = (1/2)Φ(−d_1/σ) + (1/2)Φ(−d_2/σ).

C PROOFS FOR THEOREM 3.1

Proof. Without loss of generality, we assume µ2 = µ1 + σ1_d as a convenient way to present our results. From the proof of Theorem B.1, we know that x_0 = (µ1 + µ2)/2 + ∆ lies on the decision boundary, i.e., h_T(x_0) = 0, where

h_T(x) = x⊤(µ1 − µ2)/σ² − (∥µ1 + ∆∥² − ∥µ2 + ∆∥²)/(2σ²).

Let f_T be the optimal Bayes classifier for the target domain, which can be obtained the same way as f_S in Theorem B.1. The equation h_T(x_0) = 0 implies that Pr_{(x,y)∼D_T}[y = 1|X = x_0] = Pr_{(x,y)∼D_T}[y = −1|X = x_0]. Note that x_0 is on the affine hyperplane z with h_T(z) = 0; all data points on this hyperplane have equal probabilities of being correctly classified. Starting from this hyperplane, we calculate another point x_1 where Pr_{(x,y)∼D_T}[y = 1|X = x_1] is at least ((1 − δ)/δ)·Pr_{(x,y)∼D_T}[y = −1|X = x_1]. Thus, any mislabeled point at least as far from the hyperplane as x_1 satisfies Pr_{(x,y)∼D_T}[y = 1|X = x] ≥ 1 − δ. We first aim to find such a data point x_1.
Let x_1 = x_0 − m_0σ1_d, where m_0 is a scalar measuring the distance from x_1 to the hyperplane z. We need to find m_0 such that

P_T(x_1|y = 1)/P_T(x_1|y = −1) ≥ (1 − δ)/δ, (17)

where

P_T(x_1|y = 1)/P_T(x_1|y = −1) = exp(−∥x_1 − µ1 − ∆∥²/(2σ²) + ∥x_1 − µ2 − ∆∥²/(2σ²))
= exp(−∥(µ2 − µ1)/2 − m_0σ1_d∥²/(2σ²) + ∥(µ2 − µ1)/2 + m_0σ1_d∥²/(2σ²))
= exp(m_0 d). (18)

Taking Eq. (18) into Eq. (17), we get m_0 ≥ (log((1 − δ)/δ))/d. Since isotropic Gaussian random vectors have the rotational symmetry property, we can transform the integration of the multivariate normal distribution into that of a standard normal distribution with different intervals of integration. Then any data point within distance ∥x_1 − µ1 − ∆∥ of the mean µ1 + ∆ has probability at least 1 − δ of coming from the first component. Let the region R_1 be:

R_1 = {x : ∥x − µ1 − ∆∥ ≤ ∥x_1 − µ1 − ∆∥}. (19)

Equivalently, R_1 can be simplified to:

R_1 = {x : ∥x − µ1 − ∆∥ ≤ σ(√d/2 − log((1 − δ)/δ)/√d)}. (20)

The region R_1 is valid when the data dimension d is large, which is realistic in practice: since neural networks usually deal with high-dimensional data, e.g., d ≫ 1, the region R_1 is valid. On the other hand, we aim to find a region R_2 where all data points are mislabeled. From the proof of Theorem B.1, the source classifier h_S is given by

h_S(x) = x⊤(µ1 − µ2)/σ² − (∥µ1∥² − ∥µ2∥²)/(2σ²). (21)

A data point is classified to the second component if h_S(x) < 0. Hence

R_2 = {x : x⊤1_d > (σd + 2µ1⊤1_d)/2}.

We take the intersection of R_1 and R_2: all data points in this intersection (1) have probability at least 1 − δ of coming from the first component, and (2) are classified to the second component. Formally, for (x, y) ∼ D_T, if x ∈ R_1 ∩ R_2, then Pr[f_S(x) ≠ y] ≥ 1 − δ. We note that R_1 ∩ R_2 is non-empty when (log((1 − δ)/δ))/d < α, where α = ∆⊤(µ2 − µ1)/∥µ2 − µ1∥² is the magnitude of the domain shift along the direction µ2 − µ1.
Since x_1 is chosen from R_1, to verify that R_1 ∩ R_2 is non-empty, we only need to verify that x_1 also belongs to R_2. We have x_1 ∈ R_2 if and only if:

x_1⊤1_d > (σd + 2µ1⊤1_d)/2
⟺ (µ1 + c + (σ/2)1_d − m_0σ1_d)⊤1_d > (σd + 2µ1⊤1_d)/2
⟺ (µ1 + ασ1_d + (σ/2)1_d − m_0σ1_d)⊤1_d > (σd + 2µ1⊤1_d)/2
⟺ (α − m_0)σd > 0,

where c = (µ2 − µ1)·∆⊤(µ2 − µ1)/∥µ2 − µ1∥². Therefore, if α > m_0 ≥ (log((1 − δ)/δ))/d, then R_1 ∩ R_2 is non-empty. Next, we show that Pr_{(x,y)∼D_T}[x ∈ R] increases as α increases. Let event A_0 be the set of x that are mislabeled by f_S (i.e., f_S(x) ≠ y). Let event A_1 be the set of x from the first component that are mislabeled to the second component with probability Pr[f_S(x) ≠ y] < 1 − δ. Let event A_2 be the set of x from the second component that are mislabeled to the first component with probability Pr[f_S(x) ≠ y] < 1 − δ. Thus

Pr_{(x,y)∼D_T}[x ∈ R] = Pr_{(x,y)∼D_T}[A_0] − Pr_{(x,y)∼D_T}[A_1] − Pr_{(x,y)∼D_T}[A_2].

Let event A_3 be the set of x from the first component such that Pr[f_S(x) ≠ y] < 1 − δ or Pr[f_S(x) = y] < 1 − δ. Let event A_4 be the set of x from the second component that are mislabeled to the first component. For Pr[A_3],

Pr_{x∼N(µ1+∆,σ²I_d)}[A_3] = Pr_{x∼N(µ1+∆,σ²I_d)}[R_1^∁],

which does not change as the domain shift ∆ varies. Meanwhile,

Pr_{x∼N(µ2+∆,σ²I_d)}[A_4] = Φ(−∥(µ2 − µ1)/2 + c∥/σ),

which is given by Eq. (15). By our assumption, the domain shift ∆ is positively correlated with the vector µ2 − µ1, so when α increases, Pr_{x∼N(µ2+∆,σ²I_d)}[A_4] decreases.
Since A_1 ⊆ A_3 and A_2 ⊆ A_4, the probability measure on R satisfies:

Pr_{(x,y)∼D_T}[x ∈ R] = Pr_{(x,y)∼D_T}[A_0] − Pr_{(x,y)∼D_T}[A_1] − Pr_{(x,y)∼D_T}[A_2]
≥ Pr_{(x,y)∼D_T}[A_0] − Pr_{(x,y)∼D_T}[A_3] − Pr_{(x,y)∼D_T}[A_4], (22)

where the first term is the mislabeling rate, which increases as α increases (given by Theorem B.1); the second term is a constant; and the third term decreases as α increases. The equality in Eq. (22) holds when α → ∞. Therefore, when the magnitude of the domain shift α increases, the lower bound of Pr_{(x,y)∼D_T}[x ∈ R] increases, which forces more points to break the conventional LLN assumption.

D BACKGROUND INTRODUCTION AND PROOFS FOR LEMMA 3.2

Learning with label noise is an important topic in deep learning and modern artificial intelligence research. The main idea behind it is robust training, which can be further divided into fine-grained categories such as robust architectures, robust regularization, robust loss design, and sample selection (Song et al., 2022). For example, robust architecture-based methods modify the deep model's architecture, e.g., by adding an adaptation layer or leveraging a dedicated module, to learn the label transition process and tackle the noisy labels. Robust regularization approaches usually force the DNN to overfit less to falsely labeled examples by adopting a regularizer, explicitly or implicitly. For instance, Yi et al. (2022) proposed utilizing a contrastive regularization term to learn noise-robust representations. Recently, with the widespread deployment of AI technologies, the topics of trustworthiness and fairness have drawn much interest (Liu et al., 2019; Shui et al., 2022a; b), and providing trustworthy and fair learning in LLN problems is a significant research direction. In this paper, however, we develop our discussion based on the robust loss methods in LLN. In this section, we first introduce the concepts and technical details of several noise-robust-loss-based LLN methods, including GCE (Zhang & Sabuncu, 2018), SL (Wang et al., 2019b), NCE (Ma et al., 2020), and GJS (Englesson & Azizpour, 2021). Then, we present the proof details of Lemma 3.2.

D.1 NOISE-ROBUST LOSS FUNCTIONS IN LLN METHODS

Among the numerous studies of LLN methods, loss correction is a major branch of research. The main idea of loss correction is to modify the loss function and make it robust to noisy labels. As indicated in Ma et al. (2020), a loss function ℓ is defined to be noise robust if Σ_{k=1}^K ℓ(h(x), k) = C, where C is a positive constant and K is the number of classes in the label space. For example, the most widely used cross-entropy (CE) loss is unbounded and therefore not robust to label noise. LLN studies show that existing loss functions such as mean absolute error (MAE) (Ghosh et al., 2017), reverse cross entropy (RCE) (Wang et al., 2019b), normalized cross entropy (NCE) (Ma et al., 2020), and normalized focal loss (NFL) are noise robust, and that combining them with CE can help mitigate the model's sensitivity to noisy labels. More specifically, for a given sample (x, y) and a classifier h(x), GCE (Zhang & Sabuncu, 2018) leverages the negative Box-Cox transformation as a loss function, which exploits both the noise-robustness of MAE and the implicit weighting scheme of CE:

ℓ_GCE(h(x), e_k) = (1 − h_k(x)^q)/q,

where q ∈ (0, 1] is a hyperparameter. Another noise-robust-loss-based method, SL (Wang et al., 2019b), proposes combining the reverse cross entropy (RCE) loss, which is noise tolerant, with the CE loss to obtain ℓ_SL:

ℓ_SL = αℓ_CE + βℓ_RCE = −(α Σ_{k=1}^K q(k|x) log p(k|x) + β Σ_{k=1}^K p(k|x) log q(k|x)),

where p(k|x) is the label distribution predicted by the classifier h(x) and q(k|x) is the ground-truth class distribution conditioned on sample x. GJS (Englesson & Azizpour, 2021) utilizes the multi-distribution generalization of the Jensen-Shannon divergence as a loss function, which has been proven noise robust and is in fact a generalization of CE and MAE.
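For reference, the two losses above can be written in a few lines of NumPy (our own sketch; the constant A stands for the truncated log 0 used in RCE, and the default hyperparameter values are illustrative):

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross entropy: (1 - p_y^q) / q (Zhang & Sabuncu, 2018)."""
    p_y = probs[np.arange(len(labels)), labels]
    return np.mean((1.0 - p_y ** q) / q)

def sl_loss(probs, labels, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric learning loss: alpha * CE + beta * RCE, with log 0 := A."""
    n, k = probs.shape
    onehot = np.eye(k)[labels]
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    # RCE: -sum_k p(k|x) log q(k|x); the one-hot target contributes log 1 = 0
    rce = -np.mean(np.sum(probs * np.where(onehot > 0, 0.0, A), axis=1))
    return alpha * ce + beta * rce
```

Both losses vanish for a confident correct prediction and grow as probability mass moves to the wrong class, which is the behavior the robustness analyses above rely on.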
Concretely, the generalized JS divergence and the GJS loss are defined as:

D_{GJS_π}(p^(1), …, p^(M)) = Σ_{i=1}^M π_i D_KL(p^(i) ∥ Σ_{j=1}^M π_j p^(j)),

ℓ_GJS(x, y, h) = D_{GJS_π}(y, h(x^(2)), …, h(x^(M)))/Z,

where π and the p^(i) are categorical distributions over K classes, x^(i) ∼ A(x) is a random perturbation of sample x, and Z = −(1 − π_1) log(1 − π_1). Further, Ma et al. (2020) present a simple loss normalization scheme that can be applied to any loss ℓ:

ℓ_NORM = ℓ(h(x), y) / Σ_{k=1}^K ℓ(h(x), k).

The study found that the normalized loss can indeed satisfy the robustness condition; however, it can also cause underfitting in some situations. Note that generalized cross entropy (GCE (Zhang & Sabuncu, 2018)) extends MAE and symmetric loss (SL (Wang et al., 2019b)) extends RCE, so we study GCE and SL in our experiments instead of MAE and RCE. Besides, GJS (Englesson & Azizpour, 2021) is shown to be tightly bounded around Σ_{k=1}^K ℓ(h(x), k). All these methods have been shown to be noise tolerant under either bounded random label noise or bounded class-conditional label noise, with the additional assumption that R(h⋆) = 0. We show in Section D.2 that, under the same assumption, these methods are not noise tolerant on datasets with unbounded label noise.
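The normalization scheme of Ma et al. (2020) can be illustrated directly. By construction, the normalized losses over all K labels sum to the constant C = 1, which is exactly the robustness condition stated above (sketch ours):

```python
import numpy as np

def ce(probs, k):
    """Per-sample cross entropy against class k."""
    return -np.log(probs[k] + 1e-12)

def normalized_loss(loss_fn, probs, label, num_classes):
    """Normalized loss: l(h(x), y) / sum_k l(h(x), k) (Ma et al., 2020)."""
    denom = sum(loss_fn(probs, k) for k in range(num_classes))
    return loss_fn(probs, label) / denom
```

Summing `normalized_loss` over every candidate label returns 1 regardless of the prediction, so the noise-robustness condition Σ_k ℓ(h(x), k) = C holds with C = 1.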

D.2 PROOFS FOR LEMMA 3.2

Proof. Let η_{yk}(x) = Pr[Ỹ = k | Y = y, X = x] be the probability of observing a noisy label k given the ground-truth label y and a sample x, and let η_y(x) = Σ_{k≠y} η_{yk}(x). The risk of h under noisy data is given by

R̃(h) = E_{x,ỹ}[ℓ_LLN(h(x), ỹ)]
= E_x E_{y|x} E_{ỹ|x,y}[ℓ_LLN(h(x), ỹ)]
= E_{x,y}[(1 − η_y(x)) ℓ_LLN(h(x), y) + Σ_{k≠y} η_{yk}(x) ℓ_LLN(h(x), k)]
= E_{x,y}[(1 − η_y(x)) (Σ_{k=1}^K ℓ_LLN(h(x), k) − Σ_{k≠y} ℓ_LLN(h(x), k)) + Σ_{k≠y} η_{yk}(x) ℓ_LLN(h(x), k)]
= E_{x,y}[(1 − η_y(x)) (C − Σ_{k≠y} ℓ_LLN(h(x), k)) + Σ_{k≠y} η_{yk}(x) ℓ_LLN(h(x), k)]
= E_{x,y}[(1 − η_y(x)) C] − E_{x,y}[Σ_{k≠y} (1 − η_y(x) − η_{yk}(x)) ℓ_LLN(h(x), k)]. (23)

Since Eq. (23) holds for both h̃⋆ and h⋆, we have

R̃(h̃⋆) = E_{x,y}[(1 − η_y(x)) C] − E_{x,y}[Σ_{k≠y} (1 − η_y(x) − η_{yk}(x)) ℓ_LLN(h̃⋆(x), k)] (24)

and

R̃(h⋆) = E_{x,y}[(1 − η_y(x)) C] − E_{x,y}[Σ_{k≠y} (1 − η_y(x) − η_{yk}(x)) ℓ_LLN(h⋆(x), k)]. (25)

As h̃⋆ is the minimizer of R̃(h), we have R̃(h̃⋆) ≤ R̃(h⋆). Combining Eq. (24) and Eq. (25) yields

E_{x,y}[Σ_{k≠y} (1 − η_y(x) − η_{yk}(x)) (ℓ_LLN(h⋆(x), k) − ℓ_LLN(h̃⋆(x), k))] ≤ 0. (26)

We note that ℓ_LLN(h̃⋆(x), k) ≥ ℓ_LLN(h⋆(x), k) implies p_k(x) = 0 for k ≠ y and p_y(x) = 1, where p_k(x) is the probability output by h̃⋆ of predicting sample x as class k. This argument is proved in Ghosh et al. (2017); Wang et al. (2019b); Ma et al. (2020); Englesson & Azizpour (2021) (Theorems 1&2 in Ghosh et al. (2017), Theorem 1 in Wang et al. (2019b), Lemmas 1&2 in Ma et al. (2020), and Theorems 1&2 in Englesson & Azizpour (2021)). To make ℓ_LLN(h̃⋆(x), k) ≥ ℓ_LLN(h⋆(x), k) hold for all inputs x, previous studies assume bounded label noise:

1 − η_y(x) − η_{yk}(x) > 0 for all x s.t. P(X = x) > 0. (27)

For random label noise, which assumes the mislabeling probability from the ground-truth label to any other label is the same for all inputs, i.e., η_{ji}(x) = a_0 for all i ≠ j, where a_0 is a constant, let η = (K − 1)a_0. Then Eq. (27) degrades to

1 − η − η/(K − 1) > 0 ⟺ 1 > (K/(K − 1))η ⟺ η < 1 − 1/K.
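The degraded condition for symmetric (random) label noise can be checked directly; a trivial sketch (ours):

```python
def noise_is_bounded(eta, K):
    """Symmetric-noise form of Eq. (27): 1 - eta - eta/(K-1) > 0,
    which is equivalent to eta < 1 - 1/K."""
    return 1.0 - eta - eta / (K - 1) > 0
```

For binary classification the bound is η < 1/2; for K = 10 it is η < 0.9, so the bounded-noise assumption becomes easier to satisfy as the number of classes grows, yet it still fails on the region R constructed above.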
This bounded assumption is commonly made, e.g., in Theorem 1 of Wang et al. (2019b). For class-conditional label noise, which assumes η_{ji}(x_1) = η_{ji}(x_2) for any inputs x_1 and x_2, let η_{ji}(x) = η_{ji}. Then the bounded assumption Eq. (27) degrades to η_{yk} < 1 − η_y. This bounded assumption is also common; it can be found in Theorem 2 in Ghosh et al. (2017), Theorem 1 in Wang et al. (2019b), Lemma 2 in Ma et al. (2020), and Theorem 2 in Englesson & Azizpour (2021). However, in SFDA, we proved that the following event B holds with probability at least 1 − δ:

B: 1 − η_y(x) − η_{yk}(x) < 0 for all x ∈ R.

Indeed, denote by B_1 = {ỹ ≠ y | x ∈ R} the event that x ∈ R is mislabeled. Then

Pr[B] = Pr[B|B_1] Pr[B_1] + Pr[B|B_1^∁] Pr[B_1^∁] ≥ Pr[B|B_1] Pr[B_1] ≥ 1 − δ. (28)

Given the result in Eq. (28), combined with Eq. (26), we have ℓ_LLN(h̃⋆(x), k) ≤ ℓ_LLN(h⋆(x), k) for x ∈ R. Note that only ℓ_LLN(h̃⋆(x), k) ≥ ℓ_LLN(h⋆(x), k) implies p_k(x) = 0 for k ≠ y and p_y(x) = 1, which would mean that the optimal classifier h̃⋆ obtained from noisy data makes correct predictions on all inputs, consistent with the optimal classifier h⋆ obtained from clean data. From the condition ℓ_LLN(h̃⋆(x), k) ≤ ℓ_LLN(h⋆(x), k), in contrast, we can get p_k(x) = 1 for some k ≠ y, which means that the optimal classifier h̃⋆ obtained from noisy data cannot make correct predictions on samples x ∈ R. To verify this, we use the robust loss function RCE, ℓ_RCE, as an example; the argument generalizes easily to the other robust loss functions mentioned above. Based on the definition of the RCE loss (Wang et al., 2019b), we have

ℓ_RCE(h̃⋆(x), k) = C_RCE(1 − p_k(x)) and ℓ_RCE(h⋆(x), k) = C_RCE,

where C_RCE > 0 is a constant. The above equations show that any 0 ≤ p_k(x) ≤ 1 makes the condition ℓ_LLN(h̃⋆(x), k) ≤ ℓ_LLN(h⋆(x), k) hold.
Meanwhile, h̃⋆ is the global minimizer of the risk over the noisy data, which makes h̃⋆ memorize the noisy dataset. Therefore, h̃⋆ makes incorrect predictions for x ∈ R, i.e., p_k(x) = 1 for some k ≠ y, while h⋆ is the global optimum over clean data and makes correct predictions for x ∈ R, i.e., p_k(x) = 1 for k = y. This completes the proof, as h⋆ makes different predictions on x ∈ R than h̃⋆.

E PROOFS FOR THEOREM 4.1

The proof of Theorem 4.1 is partially adapted from Liu et al. (2020). Note that we deal with unbounded label noise, whereas bounded label noise is considered in Liu et al. (2020). As indicated in Liu et al. (2020), T is set as the smallest positive integer such that θ_T⊤µ ≥ 0.1, and T = Ω(1/η) with high probability. The parameters θ are initialized by Kaiming initialization (He et al., 2015), i.e., θ_0 ∼ N(0, (2/d)I_d), and |θ_0⊤µ| converges in probability to 0. For simplicity, we assume θ_0 = 0 without loss of generality. The proof consists of two parts. The first part shows that θ_{T−1} is highly positively correlated with the ground-truth classifier. The second part shows that the prediction accuracy on mislabeled samples can be represented via the correlation between the learned classifier and the ground-truth classifier.

Proof. We begin with the first part. Let the samples be x_i = y_i(µ − σz_i), where z ∼ N(0, I_d). The gradient of the logistic loss with respect to the parameter θ is:

∇_θL(θ_t) = (1/2n) Σ_{i=1}^n x_i (tanh(θ_t⊤x_i) − ỹ_i) = −(1/2n) Σ_{i=1}^n ỹ_i x_i [term ①] + (1/2n) Σ_{i=1}^n x_i tanh(θ_t⊤x_i) [term ②]. (29)

We will show that −µ⊤∇_θL(θ_t) is lower bounded by a positive number. We first bound term ① in Eq. (29). Since x_i is sampled from a Gaussian distribution, (1/n) Σ_{i=1}^n ỹ_i µ⊤x_i has finite variance, and by the law of large numbers it converges in probability to its mean. Therefore,

E[ỹ µ⊤x] = E[ỹ µ⊤x 1{yx⊤µ ≤ r}] + E[ỹ µ⊤x 1{yx⊤µ > r}]
= E[E[ỹ µ⊤x 1{yx⊤µ ≤ r} | y]] + E[E[ỹ µ⊤x 1{yx⊤µ > r} | y]]
= E[−µ⊤x 1{x⊤µ ≤ r} | y = 1] + E[µ⊤x 1{x⊤µ > r} | y = 1].

Note that x | y = 1 is a Gaussian random vector with independent entries, so x⊤µ is distributed as w + 1, where w ∼ N(0, σ²).
Therefore, the above expectation is equivalent to

E[ỹ x⊤µ] = −∫_{−∞}^{r−1} (w + 1) dP_w + ∫_{r−1}^{∞} (w + 1) dP_w
= −∫_{−∞}^{r−1} w dP_w + ∫_{r−1}^{+∞} w dP_w − ∫_{−∞}^{r−1} dP_w + ∫_{r−1}^{+∞} dP_w
= ∫_{r−1}^{1−r} dP_w − ∫_{−∞}^{r−1} w dP_w + ∫_{r−1}^{+∞} w dP_w
= Erf[(1 − r)/(√2 σ)] + (2σ/√(2π)) exp(−(r − 1)²/(2σ²)), (30)

where Erf[x] = (2/√π) ∫_0^x e^{−t²} dt. Note that r < 1, which means that at most half of the samples are mislabeled. Thus

(1/2) E[ỹ_i µ⊤x_i] = (1/2) Erf[(1 − r)/(√2 σ)] + (σ/√(2π)) exp(−(r − 1)²/(2σ²)) > 0. (31)

Now we deal with term ② in Eq. (29):

(1/2n) |µ⊤ Σ_{i=1}^n x_i tanh(θ_t⊤x_i)| = (1/2n) |q⊤p| ≤ (1/2n) ∥q∥ ∥p∥, (32)

where q = (µ⊤x_1, µ⊤x_2, …, µ⊤x_n) ∈ R^n and p = (tanh(θ_t⊤x_1), tanh(θ_t⊤x_2), …, tanh(θ_t⊤x_n)) ∈ R^n. By the triangle inequality of the norm,

∥q∥ = ∥q − 1 + 1∥ ≤ ∥q − 1∥ + ∥1∥ = √n + ∥q − 1∥,

where q − 1 is a random vector with Gaussian coordinates. By Lemma E.1, ∥q − 1∥ ≤ 2σ√n with probability 1 − δ when n ≥ c_1 log(1/δ), where c_1 is a constant. On the other hand,

∥p∥ ≤ ∥tanh(θ_t⊤µ)1_n∥ + ∥p − tanh(θ_t⊤µ)1_n∥ ≤ tanh(θ_t⊤µ)√n + ∥θ_t∥ ∥q − 1∥ ≤ tanh(θ_t⊤µ)√n + 2σ√n ∥θ_t∥, (33)

where the second inequality is by Lemma 9 of Liu et al. (2020) and the last one by Lemma E.1. Taking Eq. (31) and Eq. (33) together with Eq. (30) into −µ⊤∇_θL(θ_t) gives:

−∇_θL(θ_t)⊤µ ≥ (1/2) Erf[(1 − r)/(√2 σ)] + (σ/√(2π)) exp(−(r − 1)²/(2σ²)) − σ(tanh(θ_t⊤µ) + 2σ∥θ_t∥). (34)

By Lemma 8 of Liu et al. (2020), sup_{θ∈R^d} ∥∇_θL(θ)∥ ≤ 1 + 2σ. Therefore, Eq. (34) can be rewritten as:

−∇_θL(θ_t)⊤µ / ∥∇_θL(θ_t)∥ ≥ (Erf[(1 − r)/(√2 σ)] + (2σ/√(2π)) exp(−(r − 1)²/(2σ²)))/(1 + 2σ) − σ(tanh(θ_t⊤µ) + 2σ∥θ_t∥)/(1 + 2σ)
≥ b_0/(1 + 2σ) − σ(tanh(θ_t⊤µ) + 2σ∥θ_t∥)/(1 + 2σ), (35)

where b_0 = (1/2) Erf[(1 − r)/(√2 σ)] + (σ/√(2π)) exp(−(r − 1)²/(2σ²)). We then prove −∇_θL(θ_t)⊤µ / ∥∇_θL(θ_t)∥ ≥ (1/10)·b_0/(1 + 2σ) by mathematical induction, which removes the dependence on θ_t from the lower bound in Eq. (35). For t = 0, the inequality holds trivially.
By the gradient descent algorithm, θ_{t+1} = −η Σ_{i=0}^t ∇_θL(θ_i), where −µ⊤∇_θL(θ_i)/∥∇_θL(θ_i)∥ ≥ (1/10)·b_0/(1 + 2σ) by the induction hypothesis. Then

θ_{t+1}⊤µ / ∥θ_{t+1}∥ ≥ (−η Σ_{i=0}^t µ⊤∇_θL(θ_i)) / (η Σ_{i=0}^t ∥∇_θL(θ_i)∥) ≥ ((1/10)·(b_0/(1 + 2σ))·Σ_{i=0}^t ∥∇_θL(θ_i)∥) / (Σ_{i=0}^t ∥∇_θL(θ_i)∥) = (1/10)·b_0/(1 + 2σ).

As t + 1 < T, we have ∥θ_{t+1}∥ ≤ (10(1 + 2σ)/b_0)·θ_{t+1}⊤µ ≤ (1 + 2σ)/b_0. Taking this into Eq. (35), we have

−∇_θL(θ_t)⊤µ / ∥∇_θL(θ_t)∥ ≥ b_0/(1 + 2σ) − σ(0.1 + (1 + 2σ)/b_0)/(1 + 2σ).

To show that −∇_θL(θ_t)⊤µ / ∥∇_θL(θ_t)∥ is lower bounded by (1/10)·b_0/(1 + 2σ), we need

h(σ) = (9/10)·b_0/(1 + 2σ) − σ(0.1 + (1 + 2σ)/b_0) > 0.

It is straightforward to verify that h(σ = 0) > 0, and it can be verified that h′(σ) > 0 for 0 < σ < c_0. Therefore, for 0 < σ < c_0 and any t < T − 1,

−∇_θL(θ_t)⊤µ / ∥∇_θL(θ_t)∥ ≥ (1/10)·b_0/(1 + 2σ).

Hence, by the gradient descent algorithm, θ_T = −η Σ_{i=0}^{T−1} ∇_θL(θ_i), and by the same argument as above,

θ_T⊤µ / ∥θ_T∥ ≥ (1/10)·b_0/(1 + 2σ). (36)

For the second part: the prediction accuracy on the mislabeled sample set converges in probability to its mean. Therefore, the expected prediction accuracy on mislabeled samples is given by

E[1{sign(θ_T⊤x) = y}] = E[1{sign(yθ_T⊤(µ − σz)) = y}] = E[1{sign(θ_T⊤(µ − σz)) = 1}] = Pr[σθ_T⊤z < θ_T⊤µ]. (37)

Note that z is a standard Gaussian vector, so θ_T⊤z is distributed as N(0, ∥θ_T∥²). Thus, Eq. (37) is equal to Φ(θ_T⊤µ/(σ∥θ_T∥)). By the inequality 1 − Φ(x) ≤ exp(−x²/2) for x > 0, we have

Φ(θ_T⊤µ/(σ∥θ_T∥)) ≥ 1 − exp(−(θ_T⊤µ/(σ∥θ_T∥))²/2) ≥ 1 − exp(−(1/200)·g(σ)²).

We denote g(σ) by:

g(σ) = Erf[(1 − r)/(√2 σ)]/(2(1 + 2σ)σ) + exp(−(r − 1)²/(2σ²))/(√(2π)(1 + 2σ)),

i.e., g(σ) = b_0/((1 + 2σ)σ), and g(σ) > 0 for any σ > 0. Note that g(σ) → ∞ as σ → 0, and g(σ) is monotonically decreasing in σ, since g′(σ) < 0 for σ > 0.

Lemma E.1. Let X = (X_1, X_2, …, X_n) ∈ R^n be a random vector with independent Gaussian coordinates X_i with E[X_i] = 0 and E[X_i²] = 1 < ∞. Then

Pr[|∥X∥_2 − √n| ≥ √n] ≤ 2 exp(−an),

where a > 0 is a constant.

Proof.
The Gaussian concentration result is taken from Proposition 5.34 in Vershynin (2018), which is used here for proving Theorem 4.1.
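The claimed properties of \(g(\sigma)\) above (positivity, divergence as \(\sigma \to 0\), and monotone decrease) can be checked numerically. The sketch below is an illustration only: the choice \(r = 0.5\) is an arbitrary assumption for the noise rate, and the grid is finite, so this is a sanity check rather than a proof.

```python
import math

def g(sigma, r=0.5):
    """Evaluate g(sigma) from the proof:
    Erf[(1-r)/(sqrt(2)*sigma)] / (2*(1+2*sigma)*sigma)
      + exp(-(r-1)^2 / (2*sigma^2)) / (sqrt(2*pi)*(1+2*sigma))."""
    term1 = math.erf((1 - r) / (math.sqrt(2) * sigma)) / (2 * (1 + 2 * sigma) * sigma)
    term2 = math.exp(-((r - 1) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * (1 + 2 * sigma))
    return term1 + term2

sigmas = [0.05 * k for k in range(1, 200)]  # grid over (0, 10)
values = [g(s) for s in sigmas]

assert all(v > 0 for v in values)                      # g(sigma) > 0
assert all(a > b for a, b in zip(values, values[1:]))  # monotonically decreasing on the grid
```

Running the check confirms that, for this choice of \(r\), \(g\) is large near \(\sigma = 0\) and decays monotonically, matching the analysis.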

F ADDITIONAL LEARNING CURVES

We provide additional learning curves on the DomainNet dataset in Figure 10. Across the 12 pairs of transfer tasks, the curves show that: (1) target classifiers have higher prediction accuracy during the early training stage; and (2) leveraging ETP via ELR alleviates the memorization of the unbounded noisy labels generated by source models.

G EXPERIMENTAL DETAILS

In this section, we additionally show the overall training process of our method, illustrated in Figure 11 and in Algorithm 1, and provide more detailed experimental information.

Datasets. We use four benchmark datasets that have been widely utilized in Unsupervised Domain Adaptation (UDA) (Long et al., 2015; Tan et al., 2020; Wang et al., 2022) and Source-Free Domain Adaptation (SFDA) research; their details are given below.

Implementation. We use ResNet-50 (He et al., 2016) as the backbone for Office-31, Office-Home, and DomainNet, and ResNet-101 (He et al., 2016) for VisDA. We place a fully connected (FC) layer as the feature extractor on top of the backbone and another FC layer as the classifier head. A batch normalization layer is inserted between the two FC layers, and weight normalization is applied to the last FC layer. For all datasets, we set the learning rate to 1e-4 for all layers except the last two FC layers, for which we use 1e-3. The training of source models is set to be consistent with SHOT (Liang et al., 2020). The hyperparameters for ELR with self-training, ELR with SHOT, ELR with G-SFDA, and ELR with NRC on the four datasets are shown in Table 5. We note that for ELR with self-training, there is only one hyperparameter β to tune. The hyperparameters for the existing SFDA algorithms are set to be consistent with their reported values on Office-31, Office-Home, and VisDA. As these SFDA algorithms have not reported results on DomainNet, we follow the hyperparameter search strategy from their work (Liang et al., 2020; Yang et al., 2021a;b) and choose the optimal hyperparameters β = 0.3 for SHOT, K = 5 and M = 5 for NRC, and k = 5 for G-SFDA. In NRC's self-regularization loss, S_i denotes the stored prediction in the memory bank; it is a constant vector identical to h(x_i), since NRC updates its memory banks before training.
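For concreteness, the FC-BN-FC head described above can be sketched as follows. This is a minimal NumPy sketch, not the authors' code: the dimensions (2048-d backbone features, a 256-d bottleneck, 65 classes as in Office-Home), the initialization, and the training-mode batch statistics are illustrative assumptions; in practice the head is built in a deep learning framework on top of the ResNet backbone, with the per-layer learning rates handled by optimizer parameter groups.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, bottleneck_dim, num_classes = 2048, 256, 65  # illustrative sizes

# FC feature extractor (bottleneck) on top of the backbone features.
W1 = rng.normal(0, 0.01, (feat_dim, bottleneck_dim))
b1 = np.zeros(bottleneck_dim)

# Weight-normalized FC classifier head: each column of W2 is g_k * v_k / ||v_k||.
V = rng.normal(0, 0.01, (bottleneck_dim, num_classes))
g = np.ones(num_classes)

def head(backbone_feats, eps=1e-5):
    """FC bottleneck -> batch norm -> weight-normalized FC classifier."""
    h = backbone_feats @ W1 + b1
    # Batch normalization between the two FC layers (batch statistics).
    h = (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)
    W2 = g * V / np.linalg.norm(V, axis=0, keepdims=True)  # weight normalization
    return h @ W2  # class logits

x = rng.normal(size=(8, feat_dim))  # a batch of backbone features
logits = head(x)
assert logits.shape == (8, num_classes)
```

Weight normalization decouples the direction and scale of each classifier weight vector, which is the design the last FC layer above mimics.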

I.1 THEORETICAL ANALYSIS OF NRC'S SELF-REGULARIZATION TERM COMPARED TO ELR

To emphasize the novelty of our proposed use of ELR in SFDA problems, we compare in detail the formulas and gradients of ELR and NRC's self-regularization (SR) term, and then explain why NRC cannot benefit from ETP by adopting the SR term alone. The motivation of the SR term is to emphasize the ego feature of the current prediction and thereby reduce the potential impact of noisy neighbors, whereas ELR considers how prediction quality changes during training and encourages the model prediction to stick to the early-time predictions for each data point. As shown in Eq. (38) and Eq. (39), ELR involves the prediction information of previous training steps in the loss (through ȳ_t), whereas SR leverages only the prediction of the current step. Moreover, examining the gradients of the two losses and the backpropagation process shows that the gradient dL_ELR(θ_t)/df(x; θ_t) grows in magnitude as the model prediction approaches the target ȳ_t, which further pulls the prediction f(x; θ_t) toward ȳ_t thanks to the large gradient magnitude. This helps utilize the early-time predictions and hence ETP. In contrast, the gradient of L_SR is a constant vector whose entries are the prediction logits, which can be very small. When dL_SR(θ_t)/df(x; θ_t) is small, the SR term is easily overwhelmed by other loss terms that favor fitting incorrect pseudo labels, leading to poor performance. This analysis reveals a fundamental difference between SR and ELR: SR neither utilizes ETP nor handles the unbounded label noise.

I.2 EMPIRICAL ANALYSIS OF ELR AND NRC IN TERMS OF THE UTILIZATION OF ETP

In addition to the above theoretical analysis of the loss functions, we observe the same conclusion experimentally. As shown in Figure 12, because pseudo-labels are updated over the course of adaptation, NRC alone can overall obtain a model with relatively high accuracy on the target domain. However, a performance drop still occurs when using NRC alone, and it is effectively avoided by adding the ELR term. This confirms that ELR can effectively leverage ETP and avoid the memorization of noisy labels.

J ADDITIONAL DISCUSSION OF PSEUDO-LABEL PURIFICATIONS IN SFDA AND LLN APPROACHES

In this section, we further discuss the similarities and differences between LLN approaches and the pseudo-label purification processes proposed in current SFDA methods. The main similarity is that both research fields must learn from data with noisy labels while aiming for a model with strong performance. The differences fall into the following aspects. From the perspective of motivation, most existing SFDA approaches are developed under the domain adaptation setting: they study how to best exploit the relationship between the source and target distributions in the absence of source data, so as to better achieve domain adaptation, and their motivation is to investigate how to better assign pseudo-labels. In contrast, LLN is an independent field that mainly studies, given a set of noisily labeled data, how to handle the label noise, conduct model training, and obtain a noise-robust model with better performance. Traditionally, the study of LLN does not involve assumptions about the data domain or a source model. Meanwhile, LLN offers more in-depth and rigorous studies, both theoretical and methodological, on the types of noise and on how to handle and exploit them. From the perspective of methodology, in order to obtain higher-quality pseudo-labels, many SFDA methods heuristically use clustering or neighborhood features to correct the pseudo-labels and then perform standard supervised learning with the corrected labels. Current SFDA methods thus focus on an explicit pseudo-label purification process, which can be summarized as noisy label correction. For LLN, however, noisy label correction is just one research sub-branch. LLN also includes many other research directions, such as studies of different label noise types, research on how to utilize and even benefit from label noise during training, and how to train models more robustly.
Many noise-robust loss functions and related theoretical analyses have been developed. We emphasize that the motivation of our paper is to investigate how to study SFDA from the perspective of learning with label noise. We combine the characteristics of SFDA with LLN approaches and discover the unbounded nature of label noise in SFDA. Further, we rigorously distinguish which LLN methods can help SFDA problems and which are limited in their use for SFDA. We believe that the study of LLN can open new avenues for SFDA research and bring more ideas and inspiration to the design of SFDA algorithms.

Figure 16: Figure 14 with a different y-scale to better show the learning details of the unbounded label noise.



Figure 1: (i) (a) The SFDA problem can be formulated as an LLN problem. (b) The existing SFDA algorithms using the local cluster information cannot address label noise due to the unbounded label noise (Section 3). (c) We prove that ETP exists in SFDA, which can be leveraged to address the unbounded label noise (Section 4). (ii) Observed Label Noise Phenomena on VisDA dataset.

Figure 2: Training accuracy on various target domains. The source models initialize the classifiers and annotate the unlabeled target data. As the classifiers memorize the unbounded label noise very fast, for the first 90 steps we evaluate the prediction accuracy on target data every batch, so one step represents one training batch; after the 90 steps, we evaluate the prediction accuracy every 0.3 epochs, shown as one step. We use CE, GCE, and ELR to train the classifiers on the labeled target data, shown as solid green, solid orange, and solid blue lines, respectively. The dotted red line represents the accuracy of the labels assigned to the target data. Eventually, the classifiers memorize the label noise, and the prediction accuracy equals the labeling accuracy (shown in (iii-iv)). Additional results on transfer pairs can be found in Appendix F.


1 4 m z d O q e 1 4 9 u z 2 r 1 K 7 y O I q k T I 7 I C X H J B a m R G 1 I n D c L J I 3 k m r + T N e r J e r H f r Y 9 5 a s P K Z Q / I H 1 u c P 1 s S Y 3 A = = < / l a t e x i t > hyperparameter (b) (c) embedding different LLN methods into SFDA algorithms < l a t e x i t s h a 1 _ b a s e 6 4 = " Z 4 3 2 i 6 X R H J j v s 3 k 6 U D D C i l 7 O a 8 8 = " > A A A C B n i c b V C 7 S g N B F J 3 1 G e M r a i n C Y B C s w q 4 E t Q z a W E Y w D 0 h C m J 2 9 m w y Z n V 1 m 7 o p h S W X j r 9 h Y K G L r N 9 j 5 N 0 4 e h S Y e G D i c c x 9 z j 5 9 I

Figure 3: (a)-(b) show the test accuracy on the DomainNet dataset with respect to hyperparameters of ELR. (c) shows the test accuracy of incorporating various existing LLN methods into the SFDA methods on the DomainNet dataset.

5.1 DISCUSSION ON EXISTING LLN METHODS

Figure 4: Evaluation of label noise methods on SFDA problems. We use source models as an initialization of classifiers trained on target data and also use source models to annotate unlabeled target data. Then we treat the target datasets as noisy datasets and use different label noise methods to solve the memorization issue.

Figure 5: True/False Neighbors on Office-Home

Figure 6: True/False Neighbors on VisDA

PROOFS FOR THEOREM B.1

Proof. The Bayes classifier f_S predicts x to the first component when
log( Pr[y = 1 | X = x] / Pr[y = -1 | X = x] ) > 0.

Figure 8: Plot of the mislabeling rate with different α. We define c as the projection of the domain shift ∆ onto the vector µ_2 - µ_1, and α represents the magnitude of the domain shift projected onto µ_2 - µ_1.


Figure 9: Illustration of noisy labels generated by the domain shift.

Ghosh et al. (2017); Yang et al. (2021b); Ma et al. (2020) (Theorem 1 in Ghosh et al. (2017), Theorem 1 in Wang et al. (2019b), Lemma 1 in Ma et al. (2020), and Theorem 1 in Englesson & Azizpour (2021)).

Figure 10: The source models are used to initialize the classifiers and to annotate the unlabeled target data. As the classifiers memorize the unbounded label noise very fast, we evaluate the prediction accuracy on target data every batch for the first 90 steps; after the 90 steps, we evaluate it every 0.3 epochs. We use CE and ELR to train the classifiers on the labeled target data, shown as solid green lines and solid blue lines, respectively.

Figure 12: Fine-grained training accuracy of NRC and NRC + ELR on the Office-Home dataset. The solid green lines represent the training process of NRC, whereas the solid blue lines represent the training process of NRC with the ELR term. The colored bands represent the performance drop.

Figure 13: Training accuracy on Office-Home dataset. The solid green lines represent the unbounded label noise in SFDA, whereas the solid red lines represent the bounded label noise.

Figure 14: Training accuracy on Office-31 dataset. The solid green lines represent the unbounded label noise in SFDA, whereas the solid red lines represent the bounded label noise.

Figure 15: Figure 13 with different y-scale to better show learning details of the unbounded label noise.

Accuracies (%) on Office-Home for ResNet50-based methods.

Accuracies (%) on DomainNet for ResNet50-based methods (12 transfer tasks and their average):

SHOT (Liang et al., 2020) ✓: 73.3 80.1 65.8 91.4 74.3 69.2 91.9 77.0 66.2 87.4 81.3 75.0 | Avg. 77.7
+ELR ✓: 78.0 81.9 67.4 91.1 75.9 71.0 92.6 79.3 68.0 88.7 84.8 77.0 | Avg. 79.7

Accuracies (%) on VisDA-C (Synthesis → Real) for ResNet101-based methods (per-class accuracy over the 12 classes, in the standard order: plane, bicycle, bus, car, horse, knife, motorcycle, person, plant, skateboard, train, truck; and their average):

SHOT (Liang et al., 2020) ✓: 94.3 88.5 80.1 57.3 93.1 94.9 80.7 80.3 91.5 89.1 86.3 58.2 | Avg. 82.9
+ELR ✓: 95.8 84.1 83.3 67.9 93.9 97.6 89.2 80.1 90.6 90.4 87.2 48.2 | Avg. 84.1

Accuracies (%) on Office-31 for ResNet50-based methods.

Optimal Hyperparameters (β/λ) on various datasets.

We use four benchmark datasets, widely adopted in Unsupervised Domain Adaptation and Source-Free Domain Adaptation (SFDA) (Liang et al., 2020) scenarios, to verify the effectiveness of leveraging the early-time training phenomenon to address unbounded label noise. Office-31 (Saenko et al., 2010) contains 4,652 images in three domains (Amazon, DSLR, and Webcam), and each domain consists of 31 classes. Office-Home (Venkateswara et al., 2017) contains 15,550 images in four domains (Real, Clipart, Art, and Product), and each domain consists of 65 classes. VisDA (Peng et al., 2017) contains 152K synthetic images and 55K real object images across 12 classes. DomainNet (Peng et al., 2019) contains around 600K images in six different domains (Clipart, Infograph, Painting, Quickdraw, Real, and Sketch). Following previous work (Tan et al., 2020; Liu et al., 2021a), we select the 40 most commonly seen classes from four domains: Real, Clipart, Painting, and Sketch.

L_ELR(θ_t) = log(1 - ȳ_t^⊤ f(x; θ_t))   (38)

and

L_SR(θ_t) = -ŷ_t^⊤ f(x; θ_t)   (39)

where ȳ_t = β ȳ_{t-1} + (1 - β) f(x; θ_t) in ELR is the moving-average prediction for x, and ŷ_t = f(x; θ_t) in SR is the constant vector copied from the current training step's prediction. Besides, the gradients of ELR and SR with respect to f(x; θ_t), treating ȳ_t and ŷ_t as constant targets, are:

dL_ELR(θ_t)/df(x; θ_t) = -ȳ_t / (1 - ȳ_t^⊤ f(x; θ_t))   and   dL_SR(θ_t)/df(x; θ_t) = -ŷ_t.
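The contrast between the two losses can be made concrete with a small numerical sketch. This is an illustration with arbitrary example values (the probability vectors and β below are assumptions, not values from our experiments): the ELR gradient -ȳ_t/(1 - ȳ_t^⊤ f) grows in magnitude as the prediction approaches the early-time target, while the SR gradient -ŷ_t is just the (typically small) prediction vector itself.

```python
import numpy as np

beta = 0.7  # ELR momentum (illustrative)

def elr_loss_and_grad(f, y_bar):
    """L_ELR = log(1 - y_bar . f); gradient w.r.t. f is -y_bar / (1 - y_bar . f)."""
    agreement = float(y_bar @ f)
    return np.log(1.0 - agreement), -y_bar / (1.0 - agreement)

def sr_loss_and_grad(f, y_hat):
    """L_SR = -y_hat . f; gradient w.r.t. f is the constant vector -y_hat."""
    return -float(y_hat @ f), -y_hat

y_bar = np.array([0.8, 0.1, 0.1])    # moving-average (early-time) target
f_far = np.array([0.4, 0.3, 0.3])    # prediction far from the target
f_near = np.array([0.9, 0.05, 0.05]) # prediction close to the target

_, g_far = elr_loss_and_grad(f_far, y_bar)
_, g_near = elr_loss_and_grad(f_near, y_bar)
_, g_sr_near = sr_loss_and_grad(f_near, f_near)  # SR target copies the current prediction

# ELR's gradient magnitude grows as f approaches y_bar ...
assert np.linalg.norm(g_near) > np.linalg.norm(g_far)
# ... whereas SR's gradient is just the prediction vector itself.
assert np.isclose(np.linalg.norm(g_sr_near), np.linalg.norm(f_near))

# The ELR target is then updated by its moving average at every step:
y_bar_next = beta * y_bar + (1 - beta) * f_near
```

This mirrors the analysis in Appendix I.1: the ELR gradient amplifies agreement with the early-time target, while the SR gradient stays small and is easily dominated by other loss terms.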

ACKNOWLEDGMENTS

This work is supported by Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grants program.


This behavior differs from that observed in conventional LLN settings (Liu et al., 2020; Xia et al., 2020a). We highlight that the main factor causing the difference is the type of label noise. To show this, we replace the unbounded label noise in SFDA with bounded random label noise and keep the other settings unchanged, as introduced in Section 4. To perform the replacement, we use the source model to identify mislabeled target samples and then assign random labels to these mislabeled samples. Figure 13 and Figure 14 show the learning curves on the Office-Home and Office-31 datasets with unbounded label noise and with bounded random label noise. Some existing LLN methods, such as PCL (Zhang et al., 2021), purify noisy labels every epoch based on ETP. Due to the fast memorization of unbounded label noise, such LLN methods are not helpful for solving label noise in SFDA, as they are unable to capture the benefits of ETP; our empirical results in Section 5.1 support this argument. We also note that PCL does not suffer from the fast memorization speed in conventional LLN settings and is able to capture the benefits of ETP there. As indicated in Figures 13-14, it takes a much longer time (more than a few epochs) for target classifiers to start memorizing bounded noisy labels. We hope these insights motivate researchers to account for memorization speed and to design better algorithms for SFDA.
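The replacement procedure described above is simple to state in code. The sketch below uses synthetic NumPy arrays as stand-ins for the source model's pseudo labels and the ground truth (the sizes and the 30% noise rate are illustrative assumptions): it identifies the samples the source model mislabels and re-draws their labels uniformly at random, turning the structured, unbounded noise into bounded random noise while leaving correct pseudo labels untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 5

true_labels = rng.integers(0, num_classes, size=1000)  # ground-truth target labels
source_preds = true_labels.copy()                       # stand-in for source-model pseudo labels
flip = rng.random(1000) < 0.3                           # ~30% mislabeled due to domain shift
source_preds[flip] = rng.integers(0, num_classes, size=int(flip.sum()))

# Replace the unbounded label noise with bounded random noise:
# keep correct pseudo labels, re-draw the mislabeled ones uniformly at random.
mislabeled = source_preds != true_labels
bounded_labels = source_preds.copy()
bounded_labels[mislabeled] = rng.integers(0, num_classes, size=int(mislabeled.sum()))

assert np.array_equal(bounded_labels[~mislabeled], true_labels[~mislabeled])
```

Note that uniform re-drawing can occasionally restore the true label, so the realized noise rate drops slightly; excluding the true class from the draw would keep the rate exactly fixed.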

I ADDITIONAL ANALYSIS OF ELR AND A STANDARD SFDA METHOD -NRC

In this section, we theoretically and empirically compare ELR and NRC in detail. NRC (Yang et al., 2021a) is a well-known SFDA method that explores the neighbors of target data with graph-based techniques and uses the neighbors' information to correct the target data's pseudo-labels, in order to boost SFDA performance. The proposed NRC objective consists of: (i) the diversity loss, where p̄_k is the empirical label distribution and q is a uniform distribution; (ii) the neighbors loss, where m is the index of the m-th nearest neighbor of x_i, S_m is the m-th item in the memory bank S, and A_im is the affinity value of the m-th nearest neighbor of input x_i in the feature space; and (iii) the expanded neighbors loss, where E_mN contains the N nearest neighbors of neighbor m in N_M.
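As a rough illustration of the first two terms (a schematic sketch, not NRC's actual implementation: the affinity values, memory bank contents, and neighbor indices below are toy stand-ins, and the KL form of the diversity term is one plausible instantiation), the objective can be written as:

```python
import numpy as np

def nrc_objective(P, bank, nn_idx, A, eps=1e-8):
    """Schematic NRC-style objective on a batch.

    P:      (n, C) current softmax predictions for the batch
    bank:   (N, C) memory bank of stored predictions S
    nn_idx: (n, K) indices of each sample's K nearest neighbors in the bank
    A:      (n, K) affinity values A_im for those neighbors
    """
    n, C = P.shape
    # Neighbors loss: push each prediction toward its neighbors' stored
    # predictions S_m, weighted by the affinity A_im.
    neigh = 0.0
    for i in range(n):
        for m in range(nn_idx.shape[1]):
            neigh -= A[i, m] * float(bank[nn_idx[i, m]] @ np.log(P[i] + eps))
    neigh /= n
    # Diversity loss: divergence between the empirical label distribution
    # p_bar and a uniform distribution q, discouraging collapse to one class.
    p_bar = P.mean(axis=0)
    q = np.full(C, 1.0 / C)
    diversity = float(np.sum(p_bar * np.log(p_bar / q + eps)))
    return neigh + diversity

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=6)      # batch predictions
bank = rng.dirichlet(np.ones(4), size=20)  # memory bank of stored predictions
nn_idx = rng.integers(0, 20, size=(6, 3))  # 3 nearest neighbors per sample
A = np.ones((6, 3))                        # affinity (1 for reciprocal neighbors in NRC)
loss = nrc_objective(P, bank, nn_idx, A)
```

The expanded-neighbors term has the same form as the neighbors loss, applied to the neighbors of neighbors with a smaller affinity; note that, as discussed above, the neighbor targets S_m are constants read from the memory bank, which is what makes the gradient through each prediction bounded.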

