TOWARDS STABLE TEST-TIME ADAPTATION IN DYNAMIC WILD WORLD

Abstract

Test-time adaptation (TTA) has been shown to be effective at tackling distribution shifts between training and testing data by adapting a given model on test samples. However, the online model updating of TTA may be unstable, and this is often a key obstacle preventing existing TTA methods from being deployed in the real world. Specifically, TTA may fail to improve or may even harm model performance when test data have: 1) mixed distribution shifts, 2) small batch sizes, and 3) online imbalanced label distribution shifts, all of which are quite common in practice. In this paper, we investigate the reasons for this instability and find that the batch norm layer is a crucial factor hindering TTA stability. Conversely, TTA performs more stably with batch-agnostic norm layers, i.e., group or layer norm. However, we observe that TTA with group and layer norms does not always succeed and still suffers from many failure cases. By digging into these failure cases, we find that certain noisy test samples with large gradients may disturb the model adaptation and result in collapsed trivial solutions, i.e., assigning the same class label to all samples. To address this collapse issue, we propose a sharpness-aware and reliable entropy minimization method, called SAR, which further stabilizes TTA in two ways: 1) removing some noisy samples with large gradients, and 2) encouraging the model weights to move towards a flat minimum so that the model is robust to the remaining noisy samples. Promising results demonstrate that SAR performs more stably than prior methods and is computationally efficient under the above wild test scenarios.

1. INTRODUCTION

Deep neural networks achieve excellent performance when the training and testing domains follow the same distribution (He et al., 2016; Wang et al., 2018; Choi et al., 2018). However, when domain shifts exist, deep networks often struggle to generalize. Such domain shifts usually occur in real applications, since test data may unavoidably encounter natural variations or corruptions (Hendrycks & Dietterich, 2019; Koh et al., 2021), such as weather changes (e.g., snow, frost, fog), sensor degradation (e.g., Gaussian noise, defocus blur), and many other factors. Unfortunately, deep models can be sensitive to such shifts and suffer severe performance degradation even when the shift is mild (Recht et al., 2018). Nonetheless, deploying deep models on test domains with distribution shifts is still in urgent demand, and model adaptation is needed in these cases.

Recently, numerous test-time adaptation (TTA) methods (Sun et al., 2020; Wang et al., 2021; Iwasawa & Matsuo, 2021; Bartler et al., 2022) have been proposed to conquer the above domain shifts by updating a model online on the test data. They fall into two main categories, i.e., Test-Time Training (TTT) (Sun et al., 2020; Liu et al., 2021) and Fully TTA (Wang et al., 2021; Niu et al., 2022a). In this work, we focus on Fully TTA since it is more generally applicable than TTT in two respects: i) it does not alter training and can adapt arbitrary pre-trained models to the test data without access to the original training data; ii) it may rely on fewer backward passes (only one or fewer than one) per test sample than TTT (see efficiency comparisons of TTT, Tent, and EATA in Table 6). TTA has been shown to boost model robustness to domain shifts significantly. However, its excellent performance is often obtained under mild test settings, e.g., adapting with a batch of test samples that share the same distribution shift type and a randomly shuffled label distribution (see Figure 1 ➀).
In the complex real world, test data may come arbitrarily. As shown in Figure 1 ➁, the test scenario may involve: i) a mixture of multiple distribution shifts, ii) small test batch sizes (even a single sample), and iii) an online shifted ground-truth test label distribution Q_t(y), which may be imbalanced at each time step t. In these wild test settings, online model updating by existing TTA methods may be unstable, i.e., it fails to help or even harms the model's robustness. One may sidestep this instability with methods in which all model weights are frozen during testing, e.g., LAME (Boudiaf et al., 2022) and DDA (Gao et al., 2022). However, these methods cannot cumulatively exploit the knowledge of previous test samples to boost adaptation performance, and thus obtain limited results when there are many test samples. In addition, the diffusion model in DDA is expected to have good generalization ability, i.e., to project any possible target shift back to the source data. This is hard to satisfy in practice, e.g., DDA performs well on noise shifts but is less competitive on blur and weather (see Table 2). Thus, how to stabilize online TTA under wild test settings remains an open question.

In this paper, we first point out that the batch norm (BN) layer (Ioffe & Szegedy, 2015) is a key obstacle, since under the above wild scenarios the mean and variance estimates in BN layers become biased. In light of this, we further investigate the effects of norm layers in TTA (see Section 4) and find that pre-trained models with batch-agnostic norm layers, i.e., group norm (GN) (Wu & He, 2018) and layer norm (LN) (Ba et al., 2016), are more amenable to stable TTA. However, TTA on GN/LN models does not always succeed and still has many failure cases. Specifically, GN/LN models optimized by online entropy minimization (Wang et al., 2021) tend to collapse, i.e., predict all samples as a single class (see Figure 2), especially when the distribution shift is severe.
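The small-batch bias of BN's test-time statistics can be seen with a toy numerical example (illustrative only; a single channel with NumPy standing in for a real network, and all numbers hypothetical): with a tiny, shifted test batch, the batch statistics BN would use at test time are a very noisy estimate of the channel's true statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source "training" statistics recorded for one BN channel:
# roughly zero mean and unit variance.
running_mean, running_var = 0.0, 1.0

# A tiny test batch (batch size 2) under a corruption that shifts
# activations to mean 1.5 with true std 0.5.
test_batch = rng.normal(loc=1.5, scale=0.5, size=2)

batch_mean = test_batch.mean()
batch_var = test_batch.var()

# Test-time BN normalizes with the noisy batch statistics: the output is
# forced to zero mean regardless of how unrepresentative the batch is ...
bn_batch = (test_batch - batch_mean) / np.sqrt(batch_var + 1e-5)
# ... whereas normalizing with the source running statistics avoids the
# small-batch estimation noise (though the distribution shift remains).
bn_running = (test_batch - running_mean) / np.sqrt(running_var + 1e-5)

print("batch stats :", batch_mean, batch_var)   # very noisy with 2 samples
print("running stats:", running_mean, running_var)
```

With only two samples, the batch variance estimate can be far from the true value of 0.25, which is exactly the estimation bias that worsens as the test batch gets smaller or mixes multiple shifts.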
To address this issue, we propose a sharpness-aware and reliable entropy minimization method (namely SAR). Specifically, we find that certain noisy samples producing gradients with large norms harm the adaptation and thus lead to model collapse. To avoid this, we filter out samples with large and noisy gradients from adaptation according to their entropy. For the remaining samples, we introduce a sharpness-aware learning scheme that drives the model weights towards a flat minimum, making the model robust to the remaining large and noisy gradients/updates.

Main Findings and Contributions. (1) We analyze and empirically verify that batch-agnostic norm layers (i.e., GN and LN) are more beneficial than BN for stable test-time adaptation under wild test settings, i.e., mixed distribution shifts, small test batch sizes, and online imbalanced label distribution shifts (see Figure 1). (2) We further address the model collapse issue of test-time entropy minimization on GN/LN models by proposing a sharpness-aware and reliable (SAR) optimization scheme, which jointly minimizes the entropy and the sharpness of the entropy loss on reliable test samples. SAR is simple yet effective and stabilizes online test-time adaptation under wild test settings.
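The two components above can be sketched roughly as follows (a simplified PyTorch illustration, not the authors' released code: the function names, the SAM-style two-step perturb-and-restore update, and the 0.4·ln C reliability threshold are our reading of the method, and in practice only the affine parameters of normalization layers would typically be adapted).

```python
import math
import torch
import torch.nn as nn

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax prediction."""
    probs = logits.softmax(dim=1)
    return -(probs * logits.log_softmax(dim=1)).sum(dim=1)

def sar_step(model, x, optimizer, rho=0.05, e_margin=None):
    """One sharpness-aware, reliable entropy-minimization step (sketch)."""
    logits = model(x)
    if e_margin is None:
        # Reliability threshold, assumed proportional to ln(num_classes).
        e_margin = 0.4 * math.log(logits.shape[1])

    # 1) Reliable-sample filtering: drop high-entropy (noisy) samples,
    #    which tend to produce large, harmful gradients.
    ent = softmax_entropy(logits)
    mask = ent < e_margin
    if mask.sum() == 0:
        return  # no reliable samples in this batch; skip the update
    loss = ent[mask].mean()

    # 2) Sharpness-aware ascent: perturb weights towards higher loss
    #    (epsilon = rho * g / ||g||, as in SAM-style optimization).
    optimizer.zero_grad()
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 3) Descent: take the gradient at the perturbed point, restore the
    #    original weights, then apply the update.
    ent2 = softmax_entropy(model(x))
    mask2 = ent2 < e_margin
    loss2 = ent2[mask2].mean() if mask2.any() else ent2.mean() * 0.0
    optimizer.zero_grad()
    loss2.backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
```

Filtering happens before any backward pass, so unreliable samples never contribute a gradient; the perturb-then-descend pair is what biases the solution towards flat minima.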

2. PRELIMINARIES

We revisit the two main categories of test-time adaptation methods in this section to facilitate our subsequent analyses, and defer detailed related-work discussion to Appendix A due to page limits.
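As a concrete reference point for the Fully TTA category, the online entropy-minimization update of Tent (Wang et al., 2021) can be sketched as follows (a minimal illustration under our own simplifications, not the authors' code: only normalization-layer affine parameters are updated, with one gradient step per test batch).

```python
import torch
import torch.nn as nn

def collect_norm_params(model: nn.Module):
    """Affine parameters of normalization layers -- the only weights a
    Tent-style fully TTA method updates at test time."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            for p in (m.weight, m.bias):
                if p is not None:
                    params.append(p)
    return params

def tent_step(model: nn.Module, x: torch.Tensor, optimizer):
    """One online adaptation step: minimize mean prediction entropy on the
    current test batch, then return the (pre-update) predictions."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

Because the objective is unsupervised (entropy of the model's own predictions), each batch both produces predictions and drives the update; this is the update rule whose stability under wild test settings is examined in the rest of the paper.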



Figure 1: An illustration of practical/wild test-time adaptation (TTA) scenarios, in which prior online TTA methods may degrade severely. The accuracy of Tent (Wang et al., 2021) is measured on ImageNet-C of level 5 with ResNet50-BN (15 mixed corruptions in (a) and Gaussian in (b-c)).

Code availability: //github.com/

