TEST-TIME RECALIBRATION OF CONFORMAL PREDICTORS UNDER DISTRIBUTION SHIFT BASED ON UNLABELED EXAMPLES

Abstract

Modern image classifiers achieve high predictive accuracy, but their predictions typically come without reliable uncertainty estimates. Conformal prediction algorithms provide uncertainty estimates by predicting a set of classes based on the probability estimates of the classifier (for example, the softmax scores). To produce such sets, conformal prediction algorithms often rely on a cutoff threshold for the probability estimates, and this threshold is chosen based on a calibration set. Conformal prediction methods guarantee reliability only when the calibration set is from the same distribution as the test set; the methods therefore need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible. In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples only. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we show that our method provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of distribution shift.

1. INTRODUCTION

Consider a (black-box) image classifier that is trained on a dataset to output probability estimates for L classes given an input feature vector x ∈ R^d. This classifier is typically a deep neural network with a softmax layer at the end. Conformal prediction algorithms are wrapped around such a black-box classifier to generate a set of classes that contains the correct label with a user-specified desired probability, based on the output probability estimates. Let x ∈ R^d be a feature vector with associated label y ∈ {1, . . . , L}. We say that a set-valued function C generates valid prediction sets for the distribution P if P_{(x,y)∼P}[y ∈ C(x)] ≥ 1 − α, where 1 − α is the desired coverage level. Conformal prediction methods generate valid set-generating functions by utilizing a calibration set consisting of labeled examples {(x_1, y_1), . . . , (x_n, y_n)} drawn from the distribution P. An important caveat of conformal prediction methods is that they assume that the examples from the calibration set and the test set are exchangeable, i.e., samples are identically distributed, or more broadly, are invariant to permutations across the two sets. The exchangeability assumption is difficult to satisfy and verify in applications and potentially limits the applicability of conformal prediction methods in practice. In fact, in practice one usually expects a distribution shift between the calibration set and the examples at inference (or the test set), in which case the coverage guarantees provided by conformal prediction methods are void. For example, the new CIFAR-10.1 and ImageNetV2 test sets were created in the same way as the original CIFAR-10 and ImageNet test sets, yet Recht et al. (2019) found a notable drop in classification accuracy for all classifiers considered. Ideally, a conformal predictor is recalibrated on a distribution before testing; otherwise the coverage guarantees are not valid (Cauchois et al., 2020).
However, in real-world applications, while distribution shifts are ubiquitous, labeled data from new distributions is scarce or non-existent.
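To make the calibration step above concrete, the following is an illustrative sketch of split-conformal calibration with a thresholding score in the style of Sadinle et al. (2019). The function and variable names (`calibrate_threshold`, `cal_probs`, `cal_labels`) are ours, not from any particular library:

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Estimate a cutoff tau from labeled calibration data so that the
    sets {l : pi_l(x) >= tau} cover the true label with probability at
    least 1 - alpha, assuming calibration and test data are exchangeable.

    cal_probs: (n, L) array of softmax outputs on the calibration set.
    cal_labels: (n,) array of true class indices.
    """
    n = len(cal_labels)
    # Softmax score the classifier assigns to the true class of each example.
    true_class_scores = cal_probs[np.arange(n), cal_labels]
    # Conformal rank with finite-sample correction: tau is the
    # floor(alpha * (n + 1))-th smallest true-class score.
    k = int(np.floor(alpha * (n + 1)))
    return np.sort(true_class_scores)[max(k - 1, 0)]

def prediction_set(probs, tau):
    """All classes whose estimated probability reaches the cutoff tau."""
    return np.where(probs >= tau)[0]
```

A distribution shift at test time changes the distribution of the true-class scores, which is exactly why the threshold tau estimated on the source calibration set may no longer yield the desired coverage.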


We therefore consider the problem of recalibrating a conformal predictor based only on unlabeled data from the new domain. This is an ill-posed problem: it is in general impossible to calibrate a conformal predictor based on unlabeled data. Yet, we propose a simple calibration method that gives excellent performance for a variety of natural distribution shifts.

Organization and contributions. We start with concrete examples of how conformal predictors yield miscalibrated uncertainty estimates under natural distribution shifts. We next propose a simple recalibration method that only uses unlabeled examples from the target distribution. We show that our method correctly recalibrates a popular conformal predictor (Sadinle et al., 2019) on a theoretical toy model. We provide empirical results for various natural distribution shifts of ImageNet showing that recalibrating conformal predictors using our proposed method significantly reduces the performance gap; in certain cases, it even achieves near oracle-level coverage.

A related work (2022) considers the problem of estimating the covariate shift from unlabeled target data, and aims at constructing PAC prediction sets instead of the standard unconditionally valid prediction sets. In contrast, we focus on complex image datasets for which covariate shift is not well defined. In Appendix B, we provide a comparison of our method to the above covariate shift based methods for a setting where we have access to labeled examples from multiple domains during training/calibration, one of which corresponds to the target distribution. We are not aware of other works studying calibration of conformal predictors under distribution shift based on unlabeled examples.
However, prior works propose to make conformal predictors robust to various distribution shifts from the source distribution of the calibration set (Cauchois et al., 2020; Gendler et al., 2022), by calibrating the conformal predictor to achieve a desired coverage in the worst-case scenario over the considered distribution shifts. Cauchois et al. (2020) consider covariate shifts and calibrate the conformal predictor to achieve coverage for the worst-case distribution within an f-divergence ball of the source distribution. Gendler et al. (2022) consider adversarial perturbations as distribution shifts and calibrate a conformal predictor to achieve coverage for the worst-case distribution obtained through ℓ2-norm bounded adversarial noise. While making the conformal predictor robust to a range of worst-case distributions at calibration time allows maintaining coverage under those worst-case distributions, this approach has two shortcomings:

1. Natural distribution shifts are difficult to capture mathematically, and models like covariate shifts or adversarial perturbations do not seem to model natural distribution shifts (such as that from ImageNet to ImageNetV2) accurately.

2. Calibrating for a worst-case scenario results in an overly conservative conformal predictor that tends to yield much higher coverage than desired for test distributions that correspond to a less severe shift from the source, which comes at the cost of reduced efficiency (i.e., larger set sizes or larger confidence intervals).

In contrast, our method does not compromise the efficiency of the conformal predictor on easier distributions, as we recalibrate the conformal predictor separately for each new dataset.

A related problem is to predict the accuracy of a classifier on new distributions from unlabeled data sampled from the new distribution (Deng & Zheng, 2021; Chen et al., 2021; Jiang et al., 2021; Deng et al., 2021; Guillory et al., 2021; Garg et al., 2022). In particular, Garg et al. (2022) proposed a simple method that achieves state-of-the-art performance in predicting classifier accuracy across a range of distributions. However, the calibration problem we consider is fundamentally different from estimating the accuracy of a classifier. While predicting the accuracy of the classifier would allow making informed decisions on whether to use the classifier for a new distribution, it does not provide a solution for recalibration.
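For intuition on the accuracy-prediction line of work, the idea of average thresholded confidence (in the spirit of Garg et al., 2022) can be sketched as follows. This is a hedged illustration with variable names of our choosing, not the authors' implementation:

```python
import numpy as np

def fit_confidence_threshold(val_probs, val_labels):
    """Pick a confidence cutoff t on labeled source data so that the
    fraction of examples whose max-softmax confidence exceeds t matches
    the source accuracy."""
    conf = val_probs.max(axis=1)
    acc = np.mean(val_probs.argmax(axis=1) == val_labels)
    # The (1 - acc)-quantile of confidences: a fraction `acc` of source
    # examples lies at or above it.
    return np.quantile(conf, 1.0 - acc)

def estimate_target_accuracy(target_probs, t):
    """Predicted accuracy on unlabeled target data: the fraction of
    examples whose confidence reaches the source-fitted cutoff."""
    return float(np.mean(target_probs.max(axis=1) >= t))
```

Such an accuracy estimate, however, does not by itself yield a recalibrated cutoff threshold for the prediction sets, which is the problem we address.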

2. CONFORMAL PREDICTION AND PROBLEM STATEMENT

We start by introducing conformal prediction and our problem setup.

Conformal prediction. Consider a black-box classifier with input feature vector x ∈ R^d that outputs a probability estimate π_ℓ(x) ∈ [0, 1] for each class ℓ = 1, . . . , L. Typically, the classifier is a

