A FRAMEWORK FOR BENCHMARKING CLASS-OUT-OF-DISTRIBUTION DETECTION AND ITS APPLICATION TO IMAGENET

Abstract

When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances with labels from outside the distribution for which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet and benchmark 525 pretrained, publicly available ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models, is available at https://github.com/mdabbah/COOD_benchmarking. The usefulness of the proposed framework and its advantage over existing alternative benchmarks are demonstrated by analyzing the results obtained for these models, which reveal numerous novel observations, including: (1) knowledge distillation consistently improves class-out-of-distribution (C-OOD) detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming 96% of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated with C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published in ICLR 2023 (Galil et al., 2023), examines the uncertainty estimation performance (ranking, calibration, and selective prediction performance) of these classifiers in an in-distribution setting.

1. INTRODUCTION

Deep neural networks (DNNs) show great performance in a wide variety of application domains, including computer vision, natural language understanding, and audio processing. These models are trained on data drawn from some distribution P(X, Y), usually under the assumption that test points will be sampled from the same distribution. When the underlying distribution P(X, Y) of test points differs from the one used to train a model, we may no longer expect the same performance from the model. The difference in distribution may be the result of many processes, such as natural deviation in the input space X, noisy sensor readings of inputs, abrupt changes due to random events, newly arrived or refined input classes, etc. Here we distinguish between input distributional changes in P(X|Y) and changes in the label distribution. We focus on the latter case and consider the class-out-of-distribution (C-OOD) scenario, also known as open-set recognition (Scheirer et al., 2013), where the label support set Y changes to a different set that includes Y_OOD, a set containing new classes not observed in training.

Consider the detection task in which our model is required to distinguish between samples belonging to classes it has seen in training, x ~ P(x | y in Y_ID), and samples belonging to novel classes, i.e., x ~ P(x | y in Y_OOD). The question we now ask is: how should models be evaluated to most accurately reflect their detection performance? We aim to benchmark the detection performance of DNN classification models that use their confidence rate function κ (e.g., softmax response; see Section 2) to detect OOD labels, where the basic premise is that instances whose labels are in Y_OOD are assigned lower κ values. Most works on OOD detection use small-scale datasets that generally do not resemble the training distribution and, therefore, are easy to detect.
The use of such sets often causes C-OOD detectors to appear better than they truly are when faced with realistic, yet harder, tasks. Motivated by this deficiency, Hendrycks et al. (2021) introduced the ImageNet-O dataset as a solution. ImageNet-O, however, has two limitations. First, it benchmarks models at a single difficulty level exclusively, having only hard C-OOD instances, which might not be relevant for every task's requirements (Section 3 explains how to define different difficulty levels). Second, the original intent in the creation of ImageNet-O was to include only hard C-OOD instances; its definition of "OOD hardness", however, was determined with respect to the difficulty ResNet-50 has in detecting C-OOD classes, specifically when using softmax as its confidence function. This property makes ImageNet-O strongly biased. Indeed, consider the right-most box in Figure 1, which corresponds to the performance of 525 models over ImageNet-O. The orange dot in that box corresponds to ResNet-50, whose OOD detection performance is severely harmed by the ImageNet-O data. Nevertheless, it is evident that numerous models perform quite well, and all other models perform better than ResNet-50. The lack of an objective benchmark for C-OOD is the main motivation for our work.
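The detection setup described above can be made concrete with a minimal sketch (not the paper's code; all names here are illustrative): the softmax response serves as the confidence function κ, and detection quality is measured by the AUROC, i.e., the probability that a randomly drawn ID sample receives higher confidence than a randomly drawn OOD sample.

```python
# Minimal sketch: C-OOD detection with the softmax-response confidence
# function, scored by AUROC. Function names are illustrative, not the
# paper's API.
import numpy as np

def softmax_response(logits):
    """Confidence kappa(x): the maximum softmax probability per sample."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def detection_auroc(conf_id, conf_ood):
    """AUROC for separating ID (high kappa) from OOD (low kappa).

    Equals the probability that a random ID sample receives a higher
    confidence than a random OOD sample, counting ties as 1/2.
    """
    conf_id = np.asarray(conf_id, dtype=float)
    conf_ood = np.asarray(conf_ood, dtype=float)
    greater = (conf_id[:, None] > conf_ood[None, :]).sum()
    ties = (conf_id[:, None] == conf_ood[None, :]).sum()
    return (greater + 0.5 * ties) / (len(conf_id) * len(conf_ood))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy logits: ID samples have one dominant class, OOD samples do not.
    labels = rng.integers(0, 10, 1000)
    logits_id = rng.normal(0, 1, (1000, 10)) + 3 * np.eye(10)[labels]
    logits_ood = rng.normal(0, 1, (1000, 10))
    auroc = detection_auroc(softmax_response(logits_id),
                            softmax_response(logits_ood))
    print(f"AUROC: {auroc:.3f}")
```

An AUROC of 0.5 corresponds to chance-level detection, which is what the hardest severity levels in Figure 1 approach.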

[Figure 1: x-axis: severity levels; y-axis: C-OOD AUROC (detection); the legend lists the evaluated models, with the ViT-L/32-384 curve highlighted.]
Figure 1: OOD performance across severity (difficulty) levels, using the benchmarks produced by our framework. The detection performance decreases for all models as we increase the difficulty, until it reaches near-chance detection performance at the highest severity level. The top curve belongs to ViT-L/32-384, which surpasses all models at every severity level. We also observe that success or failure on the previous C-OOD benchmark, ImageNet-O, does not reflect the models' true OOD detection performance, since it was designed specifically to fool ResNet-50. At the bottom we provide visual examples of OOD classes from ImageNet-21k that may populate each severity level due to their similarity to ID classes from ImageNet-1k (in this example, to a Monarch butterfly).

Our contributions. We propose a novel technique to generate a C-OOD benchmark that covers a variety of difficulty levels. Unlike other existing benchmarks (e.g., ImageNet-O), our technique is not biased towards an arbitrary model such as ResNet-50 and/or a specific confidence function such as the softmax response. This useful property is obtained by tailoring the benchmark to the model being evaluated, including its confidence function, rather than seeking a single objective criterion for the hardness of C-OOD samples (see Section 3). Second, we show and explain how we filter ImageNet-21k to use it for the purpose of generating C-OOD benchmarks for ImageNet-1k (Deng et al., 2009) classifiers (see Section 4). We will provide
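The per-model tailoring described above can be illustrated with a hedged sketch; the exact grouping rule and number of severity levels are assumptions here, not the procedure from Section 3. The idea: rank candidate OOD classes by the mean confidence the evaluated model (with its own κ) assigns to their samples, so that the classes the model is most confident about, and therefore hardest for it to detect, populate the highest severity level.

```python
# Hedged sketch (not the paper's exact algorithm): build per-model severity
# levels by ranking candidate OOD classes by the evaluated model's mean
# confidence on their samples. `num_levels` is an illustrative parameter.
import numpy as np

def severity_groups(class_mean_confidence, num_levels=11):
    """Split candidate OOD class names into severity levels.

    class_mean_confidence: dict mapping class name -> mean kappa over its
    samples, as measured with the evaluated model's own confidence function.
    Returns a list of lists; level 0 holds the easiest classes (lowest
    confidence), the last level the hardest (highest confidence).
    """
    ranked = sorted(class_mean_confidence, key=class_mean_confidence.get)
    return [list(chunk) for chunk in np.array_split(ranked, num_levels)]
```

Because the ranking is recomputed for every (model, confidence function) pair, no single model's notion of hardness is baked into the benchmark, which is what removes the ResNet-50 bias exhibited by ImageNet-O.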



*The first two authors contributed equally.

