ON THE SENSITIVITY OF REWARD INFERENCE TO MISSPECIFIED HUMAN MODELS

Abstract

Inferring reward functions from human behavior is at the center of value alignment: aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This raises the question: how accurate do these models need to be for reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic errors in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if we could guarantee that reward accuracy improves as our models improve, this would demonstrate the benefit of further work on the modeling side. We study this question both theoretically and empirically. We show that it is, unfortunately, possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we also identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.
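To make the positive regime concrete, the following is a schematic of what a linear sensitivity guarantee looks like. It is not the paper's exact theorem statement: the choice of total-variation distance and the constant C are illustrative assumptions.

```latex
% Schematic only: illustrates the *shape* of a linear sensitivity bound,
% not the paper's exact theorem. The total-variation metric and the
% constant C are assumptions made for illustration.
\[
  \underbrace{\sup_{s}\,\bigl\| \hat{\pi}(\cdot \mid s) - \pi(\cdot \mid s) \bigr\|_{\mathrm{TV}}}_{\text{error in the human model}}
  \;\le\; \epsilon
  \quad\Longrightarrow\quad
  \underbrace{\bigl\| \hat{\theta} - \theta^{*} \bigr\|}_{\text{error in the inferred reward}}
  \;\le\; C\,\epsilon ,
\]
```

where $\pi$ is the human's true behavior, $\hat{\pi}$ the assumed model, $\theta^{*}$ the true reward parameters, and $\hat{\theta}$ the inferred ones. The adversarial construction mentioned above corresponds to the negative regime: without further assumptions, no finite $C$ exists.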

1. INTRODUCTION

The expanding interest in reward learning stems from the concern that it is difficult (or even impossible) to specify what we actually want AI agents to optimize when it comes to increasingly complex, real-world tasks (Ziebart et al., 2009; Muelling et al., 2017). At the core of reward learning is the idea that human behavior serves as evidence about the underlying desired objective. Research on inferring rewards typically uses noisy rationality as a model of human behavior: the human takes higher-value actions with higher probability. This model has enjoyed great success in a variety of reward inference applications (Ziebart et al., 2008; Vasquez et al., 2014; Wulfmeier et al., 2015), but researchers have also started to come up against its limitations (Reddy et al., 2018). This is not surprising, given decades of research in behavioral economics that has identified a deluge of systematic biases people exhibit when deciding how to act, such as myopia/hyperbolic discounting (Grüne-Yanoff, 2015), optimism bias (Sharot et al., 2007), prospect theory (Kahneman & Tversky, 2013), and many more (Thompson, 1999; Do et al., 2008).

Hence, the noisy-rationality model has become a limitation in many reward learning tasks AI researchers are interested in. For instance, in shared autonomy (Javdani et al., 2015), a human operating a robotic arm may behave suboptimally because they are unfamiliar with the control interface or the robot's dynamics, leading the robot to infer the wrong goal (Reddy et al., 2014; Chan et al., 2021). Recent work in reward learning attempts to go beyond noisy rationality and consider more accurate models of human behavior, for instance by modeling biases as variations on the Bellman update (Chan et al., 2021), modeling the human's false beliefs (Reddy et al., 2018), or learning their suboptimal perception process (Reddy et al., 2020).

And while we might be getting closer, we will realistically never have a perfect model of human behavior. This raises an obvious question: Does the human model need to be perfect in order for reward inference to be successful? On the one hand, if small errors in the model can lead to catastrophic errors in inference, the entire framework of reward learning seems ill-fated, especially as it applies to value alignment: we will never have perfect models, and we will therefore never have guarantees that the agent does not do something catastrophically bad.
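As a concrete illustration of the noisy-rationality model and of how a misspecified model distorts inference, below is a minimal sketch in a toy bandit setting. The function names, the candidate-set grid search, and the rationality coefficients are hypothetical choices for illustration, not the method evaluated in this paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# reward inference under a noisy-rationality (Boltzmann) human model in a
# toy bandit setting, with a deliberately misspecified rationality coefficient.
import numpy as np

def boltzmann_policy(rewards, beta):
    """Noisy-rational model: P(a) proportional to exp(beta * r(a)).

    Higher-value actions are chosen with higher probability; beta -> inf
    recovers a perfectly rational human, beta = 0 a uniformly random one.
    """
    logits = beta * rewards
    logits -= logits.max()          # for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def infer_reward(observed_actions, candidate_rewards, beta_model):
    """Return the candidate reward vector maximizing the likelihood of the
    observed actions under the *assumed* human model (beta_model).

    If beta_model differs from the coefficient the human actually used,
    the human model is misspecified -- the situation studied here.
    """
    best, best_ll = None, -np.inf
    for r in candidate_rewards:
        p = boltzmann_policy(r, beta_model)
        ll = np.sum(np.log(p[observed_actions]))  # log-likelihood of the data
        if ll > best_ll:
            best, best_ll = r, ll
    return best

rng = np.random.default_rng(0)
true_r = np.array([0.0, 0.5, 1.0])             # hypothetical true reward
human_p = boltzmann_policy(true_r, beta=2.0)   # the human's actual behavior
actions = rng.choice(len(true_r), size=200, p=human_p)

candidates = [np.array([1.0, 0.5, 0.0]),
              np.array([0.0, 0.5, 1.0]),
              np.array([0.5, 1.0, 0.0])]
# Inference with a misspecified rationality coefficient (0.5 instead of 2.0):
print(infer_reward(actions, candidates, beta_model=0.5))
```

Here the assumed coefficient `beta_model` plays the role of the human model; the question studied in this paper is how the gap between the assumed and true behavior distributions propagates into the error of the inferred reward.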

