CONTRASTIVE NOVELTY LEARNING: ANTICIPATING OUTLIERS WITH LARGE LANGUAGE MODELS

Abstract

In many task settings, text classification models are likely to encounter examples from novel classes on which they cannot predict correctly. Selective prediction, in which models abstain on low-confidence examples, provides a possible solution, but existing models are often overly confident on OOD examples. To remedy this overconfidence, we introduce Contrastive Novelty Learning (CNL), a two-step method that generates OOD examples representative of novel classes, then trains to decrease confidence on them. First, we generate OOD examples by prompting a large language model twice: we prompt it to enumerate novel classes relevant to the label set, then generate examples from each novel class matching the task format. Second, we train our classifier with a novel contrastive objective that encourages lower confidence on generated OOD examples than training examples. When trained with CNL, classifiers improve in their ability to detect and abstain on OOD examples over prior methods by an average of 2.3% AUAC and 5.5% AUROC across 4 NLP datasets, with no cost to in-distribution accuracy.1

1. INTRODUCTION

Recent progress in NLP has led to text classification models that are accurate not only in-distribution, but also on some out-of-domain data (Arora et al., 2021). Nonetheless, some categories of real-world distribution shift still pose serious challenges. For instance, in open-set label shift, the test data includes examples from novel classes not present in the training data, making it impossible for a standard classifier to predict correctly (Scheirer et al., 2013). Moreover, novel class examples can be difficult to detect with conventional OOD detection methods, as they typically bear a strong surface resemblance to training examples (Țifrea et al., 2021).

In this paper, we frame open-set label shift as a selective prediction problem (El-Yaniv & Wiener, 2010; Geifman & El-Yaniv, 2017) that we call open-set selective classification (OSSC). OSSC requires text classifiers to predict correctly on closed-set examples while abstaining on novel class examples. To perform well on OSSC, a classifier must have lower confidence on novel class examples than on closed-set examples, which requires learning features that differentiate novel classes from closed-set classes (Perera et al., 2020). In order to supervise this representation learning, it is useful to identify what examples from novel classes might look like. Prior work has explored automatically generating OOD images by adding random perturbations to ID examples (Setlur et al., 2022). Text inputs, however, are composed of discrete tokens, and modifying even a single token can unpredictably alter the meaning of a sentence. We seek an automatic generation method that addresses these limitations, leveraging the generative ability of large language models (LLMs) like GPT-3 (Brown et al., 2020). LLMs are a desirable source of novelty, as their generation is informed by a broad corpus of examples seen during pretraining, allowing them to reliably generate from classes outside a dataset.

We present Contrastive Novelty Learning (CNL), a method to improve the OSSC ability of a classifier by automatically generating OOD examples and then training to abstain on them. To generate a diverse set of OOD examples that anticipate different potential test-time shifts, we introduce Novelty Prompting, a method that augments a source dataset with novel class examples generated by an LLM. We first perform label generation, prompting our LLM to extend the closed-set labels with novel labels. We then prompt the LLM to generate new examples conditioned on each novel label.

1 Code and data have been uploaded and will be released.
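To make the OSSC setup concrete, the decision rule reduces to thresholding the model's maximum softmax confidence: predict the argmax class when confidence clears a threshold, abstain otherwise. The sketch below (in NumPy; the function name, the abstain sentinel of -1, and the threshold value are our own illustrative choices, not specifics from the paper) implements this rule:

```python
import numpy as np

def ossc_predict(logits, threshold):
    """Open-set selective classification decision rule (sketch):
    predict the argmax class when the maximum softmax probability is at
    least `threshold`, otherwise abstain (encoded here as -1). Sweeping
    the threshold traces out the curves behind metrics such as AUAC."""
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)       # per-example confidence
    preds = probs.argmax(axis=-1)   # per-example predicted class
    return np.where(conf >= threshold, preds, -1)
```

For example, a sharply peaked logit vector yields a prediction, while a near-uniform one yields an abstention at a 0.9 threshold.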
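The two prompting steps of Novelty Prompting (label generation, then conditioned example generation) can be sketched as follows. This is illustrative only: the prompt templates, function names, and the `generate_fn` callback (standing in for any LLM completion API) are our own assumptions, not the paper's released code.

```python
def generate_novel_labels(generate_fn, closed_set_labels, n_labels=10):
    """Step 1 (label generation): ask the LLM to extend the closed label
    set with plausible novel labels. `generate_fn` maps a prompt string
    to a completion string."""
    prompt = (
        "The following are labels for a text classification task:\n"
        + "\n".join(f"- {label}" for label in closed_set_labels)
        + "\nList other related labels that do not appear above:\n"
    )
    completion = generate_fn(prompt)
    candidates = [line.strip("- ").strip() for line in completion.splitlines()]
    # Keep only non-empty labels genuinely outside the closed set.
    return [c for c in candidates if c and c not in closed_set_labels][:n_labels]

def generate_ood_examples(generate_fn, novel_label, demonstrations, n_examples=5):
    """Step 2 (example generation): condition on in-distribution
    demonstrations so generated examples match the task format, then ask
    for examples of the novel class."""
    prompt = "".join(
        f"Label: {label}\nText: {text}\n\n" for text, label in demonstrations
    )
    prompt += f"Label: {novel_label}\nText:"
    return [generate_fn(prompt).strip() for _ in range(n_examples)]
```

Filtering out generated labels that duplicate the closed set matters: examples generated for a duplicate label would be in-distribution, which would undermine the abstention training signal.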
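The contrastive objective then encourages lower confidence on the generated OOD examples than on training examples. One plausible way to realize this is a margin penalty on maximum softmax confidence, sketched below in NumPy; the margin formulation and its hyperparameter are our own illustrative assumptions, not necessarily the paper's exact loss:

```python
import numpy as np

def _softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def contrastive_novelty_loss(id_logits, id_labels, ood_logits, margin=0.1):
    """Sketch of a contrastive confidence objective: standard
    cross-entropy on in-distribution (ID) examples, plus a hinge term
    pushing maximum softmax confidence on generated OOD examples below
    the average ID confidence by at least `margin`."""
    probs = _softmax(id_logits)
    # Cross-entropy on ID examples keeps closed-set accuracy intact.
    ce = -np.log(probs[np.arange(len(id_labels)), id_labels]).mean()
    id_conf = probs.max(axis=-1)
    ood_conf = _softmax(ood_logits).max(axis=-1)
    # Penalize OOD examples whose confidence approaches ID confidence.
    contrast = np.maximum(0.0, ood_conf - id_conf.mean() + margin).mean()
    return ce + contrast
```

Under this formulation, an OOD example with near-uniform (low-confidence) predictions incurs no penalty, while a confidently classified OOD example does, which is the intended contrast.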

