A FRAMEWORK FOR BENCHMARKING CLASS-OUT-OF-DISTRIBUTION DETECTION AND ITS APPLICATION TO IMAGENET

Abstract

When deployed for risk-sensitive tasks, deep neural networks must be able to detect instances whose labels lie outside the distribution on which they were trained. In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution (C-OOD) instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty. We apply this technique to ImageNet and benchmark 525 pretrained, publicly available ImageNet-1k classifiers. The code for generating a benchmark for any ImageNet-1k classifier, along with the benchmarks prepared for the above-mentioned 525 models, is available at https://github.com/mdabbah/COOD_benchmarking. The usefulness of the proposed framework and its advantage over existing alternative benchmarks are demonstrated by analyzing the results obtained for these models, which reveals numerous novel observations, including: (1) knowledge distillation consistently improves C-OOD detection performance; (2) a subset of ViTs performs better C-OOD detection than any other model; (3) the language-vision CLIP model achieves good zero-shot detection performance, with its best instance outperforming 96% of all other models evaluated; (4) accuracy and in-distribution ranking are positively correlated with C-OOD detection; and (5) we compare various confidence functions for C-OOD detection. Our companion paper, also published in ICLR 2023 (Galil et al., 2023), examines the uncertainty estimation performance (ranking, calibration, and selective prediction performance) of these classifiers in an in-distribution setting.

1. INTRODUCTION

Deep neural networks (DNNs) achieve excellent performance in a wide variety of application domains, including computer vision, natural language understanding, and audio processing. These models are trained on data drawn from some distribution P(X, Y), usually under the assumption that test points will be sampled from the same distribution. When the underlying distribution P(X, Y) of test points differs from the one used to train a model, we can no longer expect the same performance from the model. The difference in distribution may result from many processes, such as natural deviation in the input space X, noisy sensor readings of inputs, abrupt changes due to random events, newly arrived or refined input classes, etc. Here we distinguish between input distributional changes in P(X|Y) and changes in the label distribution. We focus on the latter case and consider the class-out-of-distribution (C-OOD) scenario, also known as open-set recognition (Scheirer et al., 2013), where the label support set Y changes to a different set that includes Y_OOD, a set containing new classes not observed in training. Consider the detection task in which our model is required to distinguish between samples belonging to classes it has seen in training (the set Y_ID), where x ∼ P(x | y ∈ Y_ID), and samples belonging to novel classes, i.e., x ∼ P(x | y ∈ Y_OOD). The question we now ask is: how should models be evaluated to most accurately reflect their detection performance? We aim to benchmark the detection performance of image classifiers in this C-OOD setting at various levels of difficulty, as illustrated by the sketch below.
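To make the detection task concrete, the following Python sketch (not the paper's benchmark code; the function names max_softmax_confidence and cood_detection_auroc are illustrative, and the maximum softmax probability is just one possible confidence function) scores ID and C-OOD samples with a classifier's softmax response and measures how well the scores separate the two groups via AUROC:

```python
# Minimal sketch of C-OOD detection: score samples with a confidence
# function and measure ID-vs-OOD separability with AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def max_softmax_confidence(logits: np.ndarray) -> np.ndarray:
    """Softmax response: probability assigned to the predicted class."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def cood_detection_auroc(logits_id: np.ndarray, logits_ood: np.ndarray) -> float:
    """AUROC for separating x ~ P(x | y in Y_ID) from x ~ P(x | y in Y_OOD)."""
    scores = np.concatenate([max_softmax_confidence(logits_id),
                             max_softmax_confidence(logits_ood)])
    labels = np.concatenate([np.ones(len(logits_id)),    # 1 = in-distribution
                             np.zeros(len(logits_ood))])  # 0 = C-OOD
    return roc_auc_score(labels, scores)

# Toy usage with random logits standing in for a real classifier's outputs;
# near-0.5 AUROC is expected here since the two groups are indistinguishable.
rng = np.random.default_rng(0)
auroc = cood_detection_auroc(rng.normal(size=(100, 1000)),
                             rng.normal(size=(100, 1000)))
```

A model with good C-OOD detection should assign systematically higher confidence to ID samples, yielding an AUROC well above 0.5; swapping in other confidence functions (e.g., logit-based scores) changes only max_softmax_confidence.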

