TEST-TIME RECALIBRATION OF CONFORMAL PREDICTORS UNDER DISTRIBUTION SHIFT BASED ON UNLABELED EXAMPLES

Abstract

Modern image classifiers achieve high predictive accuracy, but their predictions typically come without reliable uncertainty estimates. Conformal prediction algorithms provide uncertainty estimates by predicting a set of classes based on the probability estimates of the classifier (for example, the softmax scores). To produce such sets, conformal prediction algorithms often rely on estimating a cutoff threshold for the probability estimates, and this threshold is chosen based on a calibration set. Conformal prediction methods guarantee reliability only when the calibration set is drawn from the same distribution as the test set; the methods therefore need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible. In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples only. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we show that our method provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of distribution shift.

1. INTRODUCTION

Consider a (black-box) image classifier that is trained on a dataset to output probability estimates for L classes given an input feature vector x ∈ R^d. This classifier is typically a deep neural network with a softmax layer at the end. Conformal prediction algorithms are wrapped around such a black-box classifier to generate, from the output probability estimates, a set of classes that contains the correct label with a user-specified desired probability. Let x ∈ R^d be a feature vector with associated label y ∈ {1, . . . , L}. We say that a set-valued function C generates valid prediction sets for the distribution P if P_{(x,y)∼P}[y ∈ C(x)] ≥ 1 − α, where 1 − α is the desired coverage level. Conformal prediction methods generate valid set-generating functions by utilizing a calibration set consisting of labeled examples {(x_1, y_1), . . . , (x_n, y_n)} drawn from the distribution P. An important caveat of conformal prediction methods is that they assume that the examples from the calibration set and the test set are exchangeable, i.e., samples are identically distributed, or more broadly, are invariant to permutations across the two sets. The exchangeability assumption is difficult to satisfy and verify in applications and potentially limits the applicability of conformal prediction methods in practice. In fact, in practice one usually expects a distribution shift between the calibration set and the examples at inference (or the test set), in which case the coverage guarantees provided by conformal prediction methods are void. For example, the new CIFAR-10.1 and ImageNetV2 test sets were created in the same way as the original CIFAR-10 and ImageNet test sets, yet Recht et al. (2019) found a notable drop in classification accuracy for all classifiers considered. Ideally, a conformal predictor is recalibrated on a distribution before testing, otherwise the coverage guarantees are not valid (Cauchois et al., 2020).
However, in real-world applications, while distribution shifts are ubiquitous, labeled data from new distributions is scarce or non-existent.
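To make the calibration step concrete, the following is a minimal sketch of split conformal prediction for classification, using the common score 1 − p(y | x) and the finite-sample-corrected quantile. The function names and the specific score are illustrative assumptions, not the method proposed in this paper:

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Estimate the cutoff threshold tau from a labeled calibration set.

    cal_probs: (n, L) array of softmax scores; cal_labels: (n,) true labels.
    Uses the conformity score 1 - p(y_true | x); tau is chosen as the
    ceil((n + 1)(1 - alpha))-th smallest score, so that the sets below
    cover the true label with probability at least 1 - alpha (under
    exchangeability of calibration and test examples).
    """
    n = len(cal_labels)
    scores = np.sort(1.0 - cal_probs[np.arange(n), cal_labels])
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n) - 1
    return scores[k]

def prediction_set(probs, tau):
    """All classes whose score 1 - p(k | x) falls below the cutoff tau."""
    return np.where(1.0 - probs <= tau)[0]
```

Recalibrating for a new distribution would mean recomputing tau from labeled examples of that distribution; the point of the paper is that such labels are usually unavailable at test time.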

