ON THE SENSITIVITY OF REWARD INFERENCE TO MISSPECIFIED HUMAN MODELS

Abstract

Inferring reward functions from human behavior is at the center of value alignment: aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This raises the question: how accurate do these models need to be for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic errors in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if we can guarantee that reward accuracy improves as our models improve, this would show the benefit of further work on the modeling side. We study this question both theoretically and empirically. We show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we also identify reasonable assumptions under which the reward inference error can be bounded linearly in the error of the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.

1. INTRODUCTION

The expanding interest in the area of reward learning stems from the concern that it is difficult (or even impossible) to specify what we actually want AI agents to optimize, when it comes to increasingly complex, real-world tasks (Ziebart et al., 2009; Muelling et al., 2017). At the core of reward learning is the idea that human behavior serves as evidence about the underlying desired objective. Research on inferring rewards typically models human behavior as noisily rational: the human takes higher-value actions with higher probability. This model has enjoyed great success in a variety of reward inference applications (Ziebart et al., 2008; Vasquez et al., 2014; Wulfmeier et al., 2015), but researchers have also started to come up against its limitations (Reddy et al., 2018). This is not surprising, given decades of research in behavioral economics that has identified a wide range of systematic biases people exhibit when deciding how to act, like myopia/hyperbolic discounting (Grüne-Yanoff, 2015), optimism bias (Sharot et al., 2007), prospect theory (Kahneman & Tversky, 2013), and many more (Thompson, 1999; Do et al., 2008). Hence, the noisy-rationality assumption has become a limitation in many reward learning tasks AI researchers are interested in. For instance, in shared autonomy (Javdani et al., 2015), a human operating a robotic arm may behave suboptimally because they are unfamiliar with the control interface or the robot's dynamics, leading the robot to infer the wrong goal (Reddy et al., 2014; Chan et al., 2021). Recent work in reward learning attempts to go beyond noisy rationality and consider more accurate models of human behavior, for instance by modeling biases as variations on the Bellman update (Chan et al., 2021), modeling the human's false beliefs (Reddy et al., 2018), or learning their suboptimal perception process (Reddy et al., 2020).
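To make the noisy-rationality model concrete, the following is a minimal sketch of a Boltzmann-rational choice rule over action values; the `beta` rationality parameter and the example Q-values are illustrative, not taken from any of the cited works.

```python
import numpy as np

def boltzmann_policy(q_values, beta=1.0):
    """Noisy-rational choice: P(a) proportional to exp(beta * Q(a)).
    Higher-value actions get higher probability; beta -> infinity
    recovers a perfectly rational agent, beta = 0 a uniformly random one."""
    logits = beta * np.asarray(q_values, dtype=float)
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Three actions with increasing value: the model prefers,
# but does not always pick, the best one.
p = boltzmann_policy([1.0, 2.0, 3.0], beta=1.0)
```

Methods built on this model, such as MaxEnt IRL, effectively fit the reward parameters that generate `q_values` under exactly this kind of choice distribution.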
And while we might be getting closer, we will realistically never have a perfect model of human behavior. This raises an obvious question: does the human model need to be perfect in order for reward inference to be successful? On the one hand, if small errors in the model can lead to catastrophic errors in inference, the entire framework of reward learning seems ill-fated, especially as it applies to value alignment: we will never have perfect models, and we will therefore never have guarantees that the agent does not do something catastrophically bad with respect to what people actually value. On the other hand, if we can show that reward accuracy is guaranteed to improve as our models improve, then there is hope: though modeling human behavior is difficult, we know that improving such models will make AI agents more aligned with us. The main goal of this work is to study whether we can bound the error in the inferred reward parameters by some function of the distance between the assumed and true human models, specifically the KL divergence between the two. We study this question both theoretically and empirically. Our first result is a negative answer: we show that, given a finite dataset of demonstrations, it is possible to construct a true human model that is "close" to the assumed model and under which the dataset is the most likely one to be generated, yet that results in arbitrarily large error in the reward we would infer via maximum likelihood estimation (MLE). However, we argue that though this negative scenario can arise, it is unlikely to occur in practice: the result relies on an adversarial construction of the human model, and though the dataset is the most likely one, it is not necessarily representative of datasets sampled from the model.
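The question can be simulated in a one-step setting. The sketch below (all quantities hypothetical, not drawn from the paper's experiments) generates choices from a "true" human that is Boltzmann-rational up to a small additive bias toward one action, fits a scalar reward parameter by MLE under the unbiased assumed model, and compares the model gap (the KL divergence mentioned above) to the resulting reward error.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# One-step setting: the reward of action a is theta_true * phi[a].
phi = np.array([0.0, 1.0, 2.0])
theta_true = 1.0

# Assumed human: Boltzmann in the reward. True human: the same, plus a
# small additive bias toward the first action (a stand-in for a
# systematic bias such as myopia or optimism).
assumed = lambda theta: softmax(theta * phi)
true_human = softmax(theta_true * phi + np.array([0.3, 0.0, 0.0]))

# MLE under the assumed model: maximize the expected log-likelihood of
# choices drawn from the true human (grid search for simplicity).
thetas = np.linspace(-2.0, 4.0, 601)
loglik = [true_human @ np.log(assumed(t)) for t in thetas]
theta_hat = thetas[int(np.argmax(loglik))]

model_gap = kl(true_human, assumed(theta_true))  # divergence of the human model
reward_err = abs(theta_hat - theta_true)         # error in the inferred reward
```

Here the bias toward the low-reward action makes the human look less reward-sensitive, so the MLE underestimates the reward parameter; the interesting question, studied in the rest of the paper, is how large `reward_err` can get relative to `model_gap`.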
Given this, our main result is thus a reason for hope: we identify mild assumptions on the true human behavior under which we can bound the error in the inferred reward parameters linearly in the error of the human model. Thus, if these assumptions hold, refining the human model will monotonically improve the accuracy of the learned reward. We also show how this bound simplifies for particular biases like false internal dynamics or myopia. Empirically, we arrive at a similarly optimistic message about reward learning, using both diagnostic gridworld domains (Fu et al., 2019b) and the Lunar Lander game, which involves continuous control over a continuous state space. First, we verify that under various simulated biases, when the conditions on the human model are likely to be satisfied, small divergences in human models do not lead to large reward errors. Second, using real human demonstration data, we derive a natural human bias and demonstrate that the same finding holds even with real humans. Overall, our results suggest an optimistic perspective on the framework of reward learning, and indicate that efforts to improve human models will further enhance the quality of the inferred rewards.

2. RELATED WORK

Inverse reinforcement learning (IRL) aims to use expert demonstrations, often from a human, to infer a reward function (Ng & Russell, 2000; Ziebart et al., 2008). Maximum-entropy (MaxEnt) IRL is a popular IRL framework that models the demonstrator as noisily optimal, maximizing reward while also randomizing actions as much as possible (Ziebart et al., 2008; 2010). This is equivalent to modeling humans as Boltzmann rational. MaxEnt IRL is preferred in practice over Bayesian IRL (Ramachandran & Amir, 2007), which learns a posterior over reward functions rather than a point estimate, due to better scaling in high-dimensional environments (Wulfmeier et al., 2015). More recently, Guided Cost Learning (Finn et al., 2016) and Adversarial IRL (Fu et al., 2018) learn reward functions that are more robust to environment changes, but build off similar modeling assumptions as MaxEnt IRL. Gleave & Toyer (2022) connected MaxEnt IRL to maximum likelihood estimation (MLE), which is the framework that we consider in this work. One of the challenges with IRL is that rewards are not always uniquely identified from expert demonstrations (Cao et al., 2021; Kim et al., 2021). Since identifiability is orthogonal to the main message of our work (sensitivity to misspecified human models), we assume that the dataset avoids this ambiguity. Recent IRL algorithms attempt to account for possible irrationalities in the expert (Evans et al., 2016; Reddy et al., 2018; Shah et al., 2019). Reddy et al. (2018; 2020) consider experts who act according to internal physical and belief dynamics, and show that explicitly learning these dynamics improves the accuracy of the learned reward. Singh et al. (2018) account for human risk sensitivity when learning the reward. Shah et al. (2019) propose learning general biases using demonstrations across similar tasks, but conclude that doing so without prior knowledge is difficult. Finally, Chan et al. (2021) show that knowing the type of irrationality the expert exhibits can improve reward inference over even an optimal expert. In this work, we do not assume the bias can be uncovered, but rather analyze how sensitive reward inference is to such biases.

More generally, reward learning is a specific instantiation of an inverse problem, which is well-studied in the existing literature. In the framework of Bayesian inverse problems, prior work has analyzed how misspecified likelihood models affect the accuracy of the inferred quantity when performing Bayesian inference. Owhadi et al. (2015) showed that two similar models can lead to completely opposite inferences of the desired quantity. Meanwhile, Sprungk (2020) showed inference is stable under a

