BENCHMARKS FOR DEEP OFF-POLICY EVALUATION

Abstract

Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. Being able to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because there is currently no comprehensive, unified benchmark, and measuring algorithmic progress has been challenging due to the lack of difficult evaluation tasks. To address this gap, we present a collection of policies that, in conjunction with existing offline datasets, can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with wide selections of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress, motivated by a set of principles designed to challenge and test the limits of existing OPE methods. We perform an evaluation of state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area†.

1. INTRODUCTION

Reinforcement learning algorithms can acquire effective policies for a wide range of problems through active online interaction, such as in robotics (Kober et al., 2013), board games and video games (Tesauro, 1995; Mnih et al., 2013; Vinyals et al., 2019), and recommender systems (Aggarwal et al., 2016). However, this sort of active online interaction is often impractical for real-world problems, where active data collection can be costly (Li et al., 2010), dangerous (Hauskrecht & Fraser, 2000; Kendall et al., 2019), or time consuming (Gu et al., 2017). Batch (or offline) reinforcement learning has been studied extensively in domains such as healthcare (Thapa et al., 2005; Raghu et al., 2018), recommender systems (Dudík et al., 2014; Theocharous et al., 2015; Swaminathan et al., 2017), education (Mandel et al., 2014), and robotics (Kalashnikov et al., 2018). A major challenge with such methods is the off-policy evaluation (OPE) problem, in which one must evaluate the expected performance of policies solely from offline data. This is critical for several reasons, including providing high-confidence guarantees prior to deployment (Thomas et al., 2015) and performing policy improvement and model selection (Bottou et al., 2013; Doroudi et al., 2017). The goal of this paper is to provide a standardized benchmark for evaluating OPE methods. Although considerable theoretical (Thomas & Brunskill, 2016; Swaminathan & Joachims, 2015; Jiang & Li, 2015; Wang et al., 2017; Yang et al., 2020) and practical progress (Gilotte et al., 2018; Nie et al., 2019; Kalashnikov et al., 2018) on OPE algorithms has been made across a range of different domains, there are few broadly accepted evaluation tasks that combine the complex, high-dimensional problems commonly explored by modern deep reinforcement learning algorithms (Bellemare et al., 2013; Brockman et al., 2016) with standardized evaluation protocols and metrics.
Our goal is to provide a set of tasks that span a range of difficulty, exercise a variety of design properties, and provide policies with different behavioral patterns, in order to establish a standardized framework for comparing OPE algorithms. We put particular emphasis on large datasets, long-horizon tasks, and task complexity to facilitate the development of scalable algorithms that can solve high-dimensional problems. Our primary contribution is the Deep Off-Policy Evaluation (DOPE) benchmark. DOPE is designed to measure the performance of OPE methods by 1) evaluating on challenging control tasks with properties known to be difficult for OPE methods, but which occur in real-world scenarios, 2) evaluating across a range of policies with different values, to directly measure performance on policy evaluation, ranking, and selection, and 3) evaluating in ideal and adversarial settings in terms of dataset coverage and support. These factors are independent of task difficulty, but are known to have a large impact on OPE performance. To achieve 1), we selected tasks according to a set of design principles outlined in Section 3.1. To achieve 2), for each task we include 10 to 96 policies for evaluation and devise an evaluation protocol that measures policy evaluation, ranking, and selection, as outlined in Section 3.2. To achieve 3), we provide two domains with differing dataset coverage and support properties, described in Section 4. Finally, to enable an easy-to-use research platform, we provide the datasets, target policies, evaluation API, and the recorded results of state-of-the-art algorithms (presented in Section 5) as open source.

2. BACKGROUND

We briefly review the off-policy evaluation (OPE) problem setting. We consider Markov decision processes (MDPs), defined by a tuple (S, A, T, R, ρ_0, γ), with state space S, action space A, transition distribution T(s'|s, a), initial state distribution ρ_0(s), reward function R(s, a), and discount factor γ ∈ (0, 1]. In reinforcement learning, we are typically concerned with optimizing or estimating the performance of a policy π(a|s).

The performance of a policy is commonly measured by the policy value V^π, defined as the expected sum of discounted rewards:

V^π := E_{s_0 ∼ ρ_0, a_t ∼ π(·|s_t), s_{t+1} ∼ T(·|s_t, a_t)} [ Σ_{t=0}^∞ γ^t R(s_t, a_t) ].   (1)

If we have access to state and action samples collected from a policy π, then we can use the sample mean of observed returns to estimate the value function above. In off-policy evaluation, however, we are interested in estimating the value of a policy when the data is collected by a separate behavior policy π_B(a|s). This setting can arise, for example, when data is being generated online by another process, or in the purely offline case when we have a historical dataset; in this work we consider the latter, purely offline setting. The typical setup for this problem formulation is that we are provided with a discount γ, a dataset of trajectories collected from a behavior policy, D = {(s_0, a_0, r_0, s_1, . . .)}, and optionally the action probabilities of the behavior policy, π_B(a_t|s_t). In many practical applications, logging action propensities is not possible, for example when the behavior policy is a mix of ML models and hard-coded business logic. For this reason, we focus on the setting without propensities, to encourage future work on behavior-agnostic OPE methods. For the methods that require propensities, we estimate the propensities with behavior cloning. The objective can take multiple flavors, as shown in Fig. 1. A common task in OPE is to estimate the performance, or value, of a policy π (which may not be the same as π_B) so that the estimated
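The on-policy Monte Carlo baseline implicit in Eq. (1) can be sketched in a few lines. This is an illustrative sketch, not part of the DOPE codebase; function names are our own. It highlights exactly what OPE methods cannot do: average returns over rollouts of π itself.

```python
def discounted_return(rewards, gamma):
    """Sum_t gamma^t * r_t for one trajectory, accumulated backwards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def on_policy_value_estimate(trajectories, gamma):
    """Monte Carlo estimate of V^pi from rollouts of pi itself.

    trajectories: list of per-episode reward sequences.
    In the OPE setting, such rollouts of pi are unavailable; estimators
    must instead reweight or model data collected by a behavior policy.
    """
    returns = [discounted_return(rews, gamma) for rews in trajectories]
    return sum(returns) / len(returns)

# Two toy episodes with gamma = 0.5:
# returns are 1 + 0.5 * 1 = 1.5 and 2.0, so the estimate is 1.75
print(on_policy_value_estimate([[1.0, 1.0], [2.0]], 0.5))  # -> 1.75
```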


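To make the role of propensities concrete, the classical importance-sampling family of estimators reweights logged returns by cumulative ratios π(a_t|s_t)/π_B(a_t|s_t); when propensities are not logged, π_B can be replaced by a behavior-cloned estimate, as described above. The following is a minimal sketch of a self-normalized per-trajectory importance-sampling estimator under illustrative names, not an implementation from the benchmark.

```python
def snis_estimate(trajectories, gamma):
    """Self-normalized per-trajectory importance sampling.

    trajectories: list of lists of (pi_prob, b_prob, reward) triples,
    where b_prob may come from a behavior-cloning model of pi_B when
    true propensities were not logged.
    """
    weights, returns = [], []
    for traj in trajectories:
        w, g, disc = 1.0, 0.0, 1.0
        for pi_p, b_p, r in traj:
            w *= pi_p / b_p      # cumulative importance ratio
            g += disc * r        # discounted return of the trajectory
            disc *= gamma
        weights.append(w)
        returns.append(g)
    # Normalizing by the weight sum (rather than the trajectory count)
    # trades a small bias for substantially lower variance.
    z = sum(weights)
    return sum(w * g for w, g in zip(weights, returns)) / z

# Two one-step trajectories; the second action was twice as likely
# under pi as under pi_B, so its return is upweighted.
print(snis_estimate([[(0.5, 0.5, 1.0)], [(0.8, 0.4, 1.0)]], 1.0))  # -> 1.0
```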

Figure 1: In Off-Policy Evaluation (top) the goal is to estimate the value of a single policy given only data. Offline Policy Selection (bottom) is a closely related problem: given a set of N policies, attempt to pick the best given only data.
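The distinction in Figure 1 suggests two natural kinds of metric: how well the estimated values order the N candidate policies, and how much value is lost by deploying the policy an OPE method ranks first. The sketch below illustrates both with a Spearman rank correlation and a top-1 regret; names are hypothetical and this is not the DOPE evaluation API.

```python
def _ranks(values):
    """Rank of each entry (0 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(estimated, true):
    """Spearman rank correlation between estimated and true values."""
    n = len(estimated)
    re, rt = _ranks(estimated), _ranks(true)
    d2 = sum((a - b) ** 2 for a, b in zip(re, rt))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def top1_regret(estimated, true):
    """Gap between the best achievable true value and the true value
    of the policy an OPE method would select (highest estimate)."""
    pick = max(range(len(estimated)), key=lambda i: estimated[i])
    return max(true) - true[pick]

est = [0.9, 0.4, 0.7]   # OPE estimates for 3 candidate policies
tru = [1.0, 0.2, 0.8]   # ground-truth values (e.g. oracle rollouts)
print(spearman(est, tru))     # -> 1.0 (the ordering is perfect)
print(top1_regret(est, tru))  # -> 0.0 (the best policy is selected)
```

Note that an estimator can be badly biased in absolute value yet still achieve perfect rank correlation and zero regret, which is why the benchmark measures evaluation, ranking, and selection separately.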

† Code and data availability: https://github.com/google-research/

