GENERALIZATION PROPERTIES OF RETRIEVAL-BASED MODELS

Anonymous authors
Paper under double-blind review

Abstract

Many modern high-performing machine learning models such as GPT-3 primarily rely on scaling up models, e.g., transformer networks. Simultaneously, a parallel line of work aims to improve the model performance by augmenting an input instance with other (labeled) instances during inference. Examples of such augmentations include task-specific prompts and similar examples retrieved from the training data by a nonparametric component. Remarkably, retrieval-based methods have enjoyed success on a wide range of problems, ranging from standard natural language processing and vision tasks to protein folding, as demonstrated by many recent efforts, including WebGPT and AlphaFold. Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored. In this paper, we present a formal treatment of retrieval-based models to characterize their generalization ability. In particular, we focus on two classes of retrieval-based classification approaches: First, we analyze a local learning framework that employs an explicit local empirical risk minimization based on retrieved examples for each input instance. Interestingly, we show that breaking down the underlying learning task into local sub-tasks enables the model to employ a low complexity parametric component to ensure good overall accuracy. The second class of retrieval-based approaches we explore learns a global model using kernel methods to directly map an input instance and retrieved examples to a prediction, without explicitly solving a local learning task.

1. INTRODUCTION

As our world is complex, we need expressive machine learning models to make high-accuracy predictions on real-world problems. There are multiple ways to increase the expressiveness of a machine learning model. A popular approach is to homogeneously scale up the size of a parametric model, such as a neural network, which has been behind many recent high-performing models such as GPT-3 (Brown et al., 2020) and ViT (Dosovitskiy et al., 2021). Their performance (accuracy) improves monotonically with increasing model size, as demonstrated by "scaling laws" (Kaplan et al., 2020). Such large models, however, have their own limitations, including high computation cost, catastrophic forgetting (difficulty adapting to changing data), lack of provenance, and lack of explainability. Classical instance-based models (Fix & Hodges, 1989), on the other hand, offer many desirable properties by design: efficient data structures, incremental learning (easy addition and deletion of knowledge), and some provenance for their predictions based on the nearest neighbors of the input. However, these models often suffer from weaker empirical performance compared to deep parametric models. Increasingly, a middle ground that combines the two paradigms and retains the best of both worlds is becoming popular across various domains, ranging from natural language (Das et al., 2021; Wang et al., 2022; Liu et al., 2022; Izacard et al., 2022), to vision (Liu et al., 2015; 2019; Iscen et al., 2022; Long et al., 2022), to reinforcement learning (Blundell et al., 2016; Pritzel et al., 2017; Ritter et al., 2020), to even protein structure prediction (Cramer, 2021). In such approaches, given a test input, one first retrieves relevant entries from a data index and then processes the retrieved entries along with the test input to make the final prediction using a machine learning model. This process is visualized in Figure 1b.
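As a concrete illustration of this retrieve-then-predict pipeline, the following minimal sketch retrieves the nearest neighbors of a test input from an index and fits a small model only on those neighbors (a local empirical risk minimization step, in the spirit of the local learning framework described in the abstract). The toy data, the Euclidean retrieval rule, and the least-squares local classifier are illustrative assumptions introduced here, not the setup analyzed in this paper.

```python
import numpy as np

def retrieve(index_x, query, k):
    """Return indices of the k nearest stored examples (Euclidean distance)."""
    dists = np.linalg.norm(index_x - query, axis=1)
    return np.argsort(dists)[:k]

def local_predict(index_x, index_y, query, k=5):
    """Local ERM: fit a least-squares classifier on the retrieved neighbors
    only, then apply it to the query (decision threshold at 0.5)."""
    idx = retrieve(index_x, query, k)
    X = np.c_[index_x[idx], np.ones(k)]  # neighbor features plus a bias column
    w, *_ = np.linalg.lstsq(X, index_y[idx].astype(float), rcond=None)
    return int(np.r_[query, 1.0] @ w > 0.5)

# Hypothetical toy index: label is 1 iff the first coordinate is positive.
index_x = np.array([[1.0, 0.0], [1.5, 0.2], [2.0, 0.1], [1.2, -0.3], [2.5, 0.4],
                    [-1.0, 0.0], [-1.5, 0.2], [-2.0, 0.1], [-1.2, -0.3], [-2.5, 0.4]])
index_y = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
```

Note that each local fit only needs to separate the retrieved neighborhood, so the parametric component can be far simpler than a single global model for the whole task, which is the intuition formalized in the analysis of the local learning framework.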
For example, in semantic parsing, models that augment a parametric seq2seq model with similar examples have not only outperformed much larger models but are also more robust to changes in data (Das et al., 2021). While classical learning setups (cf. Figure 1a) have been studied extensively over decades, even basic properties and trade-offs pertaining to retrieval-based models (cf. Figure 1b), despite their

