BENCHMARKS FOR DEEP OFF-POLICY EVALUATION

Abstract

Off-policy evaluation (OPE) holds the promise of leveraging large, offline datasets for both evaluating and selecting complex decision-making policies. The ability to learn offline is particularly important in many real-world domains, such as healthcare, recommender systems, and robotics, where online data collection is an expensive and potentially dangerous process. Accurately evaluating and selecting high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because there is no comprehensive, unified benchmark, and measuring algorithmic progress has been challenging due to the absence of sufficiently difficult evaluation tasks. To address this gap, we present a collection of policies that, in conjunction with existing offline datasets, can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with a wide selection of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress, motivated by a set of principles designed to challenge and test the limits of existing OPE methods. We evaluate state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area†.

1. INTRODUCTION

Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudík et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge with such methods is the off-policy evaluation (OPE) problem, in which the expected performance of a policy must be estimated solely from offline data. OPE is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015) and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017). The goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made across a range of domains, there are few broadly accepted evaluation tasks that combine complex, high-dimensional problems with standardized evaluation protocols.
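To make the OPE problem concrete, consider the simplest classical estimator: importance sampling over whole trajectories, which reweights returns observed under a behavior policy by the likelihood ratio of the target policy's actions. The Python sketch below is purely illustrative and is not part of our benchmark; the function name and its arguments (trajectories, target_logp, behavior_logp) are hypothetical placeholders for data and policies supplied by the reader.

    import numpy as np

    def importance_sampling_ope(trajectories, target_logp, behavior_logp, gamma=0.99):
        """Per-trajectory importance-sampling estimate of a target policy's return.

        trajectories: list of episodes, each a list of (state, action, reward)
        tuples collected by the behavior policy. target_logp and behavior_logp
        map (state, action) to log-probabilities under the respective policies.
        """
        estimates = []
        for traj in trajectories:
            # Log-likelihood ratio of the trajectory's actions under the
            # target policy versus the behavior policy.
            log_ratio = sum(target_logp(s, a) - behavior_logp(s, a) for s, a, _ in traj)
            # Discounted return actually observed along the trajectory.
            ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(traj))
            # Reweight the observed return by the importance weight.
            estimates.append(np.exp(log_ratio) * ret)
        return float(np.mean(estimates))

An estimator of this form is unbiased when the behavior policy covers the support of the target policy, but its variance grows exponentially with the horizon; this gap between simple estimators and practical needs is part of what motivates a standardized benchmark for more sophisticated OPE methods.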

† Data and code availability: https://github.com/google-research/

