TAILORING: ENCODING INDUCTIVE BIASES BY OPTIMIZING UNSUPERVISED OBJECTIVES AT PREDICTION TIME

Abstract

From CNNs to attention mechanisms, encoding inductive biases into neural networks has been a fruitful source of improvement in machine learning. Auxiliary losses, which add extra terms to the loss function, are a general way of encoding biases that help networks learn better representations. However, since they are minimized on the training data, they suffer from the same generalization gap as regular task losses. Moreover, by changing the loss function, the network optimizes a different objective than the one we care about. In this work we solve both problems. First, we take inspiration from transductive learning and note that, after receiving an input but before making a prediction, we can fine-tune our models on any unsupervised objective. We call this process tailoring, because we customize the model to each input. Second, we formulate a nested optimization (similar to those in meta-learning) and train our models to perform well on the task loss after adapting to the tailoring loss. The advantages of tailoring and meta-tailoring are discussed theoretically and demonstrated empirically on several diverse examples: encoding conservation laws from physics, increasing robustness to adversarial examples, meta-tailoring with contrastive losses to improve theoretical generalization guarantees, and increasing performance in model-based RL.

1. INTRODUCTION

The key to successful generalization in machine learning is the encoding of useful inductive biases. A variety of mechanisms, from parameter tying to data augmentation, have proven useful, but there is no systematic strategy for designing and implementing them. Auxiliary losses are a paradigm for encoding a wide variety of biases, constraints, and objectives, helping networks learn better representations and generalize more broadly. They add an extra term to the task loss and minimize it over the training data or, in semi-supervised learning, over an extra set of unlabeled data. However, they have two major difficulties:

1. Auxiliary losses are only minimized at training time, not at the query points. This causes a generalization gap between training and testing, in addition to that of the task loss.
2. By minimizing the sum of the task loss and the auxiliary loss, we are optimizing a different objective than the one we care about (only the task loss).

In this work we propose a solution to each problem:

1. We use ideas from transductive learning to minimize the auxiliary loss at the query by running an optimization at prediction time, eliminating the generalization gap for the auxiliary loss. We call this process tailoring, because we customize the model to each query.
2. We use ideas from meta-learning to learn a model that performs well on the task loss under the assumption that the auxiliary loss will be optimized at prediction time. This meta-tailoring effectively trains the model to leverage the unsupervised tailoring loss to minimize the task loss.

Tailoring a predictor. In classical inductive supervised learning, an algorithm consumes a training dataset of input-output pairs $\{(x_i, y_i)\}_{i=1}^n$ and produces a set of parameters $\theta$ by minimizing a supervised loss $\sum_{i=1}^n L_{\text{sup}}(f_\theta(x_i), y_i)$ and, optionally, an unsupervised auxiliary loss $\sum_{i=1}^n L_{\text{unsup}}(\theta, x_i)$. These parameters specify a hypothesis $f_\theta(\cdot)$ that, given a new input $x$, generates an output $\hat{y} = f_\theta(x)$.
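The prediction-time optimization can be sketched with a toy conservation law, in the spirit of the physics example mentioned in the abstract: before predicting, fine-tune a per-query copy of the parameters on an unsupervised loss evaluated at the query itself. The linear model, the sum-conservation loss, and the step sizes below are illustrative assumptions for the sketch, not the paper's actual models or losses.

```python
import numpy as np

def predict(Theta, x):
    # toy linear model mapping an input state x to a predicted state
    return Theta @ x

def tailoring_loss(Theta, x):
    # unsupervised tailoring objective: the prediction should conserve
    # the total "mass" sum(x) -- an illustrative conservation law,
    # not one of the paper's actual tailoring losses
    return (predict(Theta, x).sum() - x.sum()) ** 2

def tailor_and_predict(Theta, x, lr=0.05, steps=20):
    # tailoring: fine-tune a per-query copy of the parameters on the
    # unsupervised loss at the query x, then predict with the copy
    Th = Theta.copy()
    for _ in range(steps):
        residual = predict(Th, x).sum() - x.sum()
        # analytic gradient of (1^T Theta x - 1^T x)^2 w.r.t. Theta
        grad = 2.0 * residual * np.outer(np.ones(len(x)), x)
        Th = Th - lr * grad
    return predict(Th, x)
```

Meta-tailoring would additionally backpropagate the task loss through these inner adaptation steps, so that the trained parameters anticipate being tailored; the toy above shows only the prediction-time half.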
This problem setting misses a substantial opportunity: before the learning algorithm sees the query

