ADVANTAGE-WEIGHTED REGRESSION: SIMPLE AND SCALABLE OFF-POLICY REINFORCEMENT LEARNING

Abstract

In this work, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters. (Video¹)

1. INTRODUCTION

Model-free reinforcement learning (RL) can be a general and effective methodology for training agents to acquire sophisticated behaviors with minimal assumptions about the underlying task. However, RL algorithms can be substantially more complex to implement and tune than standard supervised learning methods. Arguably the simplest reinforcement learning methods are policy gradient algorithms (Sutton et al., 2000), which directly differentiate the expected return and perform gradient ascent. Unfortunately, these methods can be notoriously unstable and are typically on-policy, often requiring a substantial number of samples to learn effective behaviors. Our goal is to develop an RL algorithm that is simple, easy to implement, and can readily incorporate off-policy data.

In this work, we propose advantage-weighted regression (AWR), a simple off-policy algorithm for model-free RL. Each iteration of the AWR algorithm consists of two supervised regression steps: one for training a value function baseline via regression onto cumulative rewards, and another for training the policy via weighted regression. The complete algorithm is shown in Algorithm 1. AWR can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. Despite its simplicity, we find that AWR achieves competitive results when compared to commonly used on-policy and off-policy RL algorithms, and can effectively incorporate fully off-policy data, which has been a challenge for other RL algorithms. Our derivation presents an interpretation of AWR as a constrained policy optimization procedure, and provides a theoretical analysis of the use of off-policy data with experience replay.
We first revisit the original formulation of reward-weighted regression (RWR) (Peters & Schaal, 2007), an on-policy RL method that uses supervised learning to perform policy updates, and then propose a number of new design decisions that significantly improve performance on a suite of standard control benchmark tasks. We then provide a theoretical analysis of AWR, including its capability to incorporate off-policy data with experience replay. Although AWR involves only a few simple design decisions, we show experimentally that these additions yield a large improvement over previous methods for regression-based policy search, such as RWR, while also being substantially simpler than more modern methods, such as MPO (Abdolmaleki et al., 2018b). We show that AWR achieves competitive performance when compared to several well-established state-of-the-art on-policy and off-policy algorithms.

2. PRELIMINARIES

In reinforcement learning, the objective is to learn a policy that maximizes an agent's expected return. At each time step $t$, the agent observes the state of the environment $s_t$ and samples an action from a policy $a_t \sim \pi(a_t|s_t)$. The agent then applies that action, which results in a new state $s_{t+1}$ and a scalar reward $r_t = r(s_t, a_t)$. The goal is to learn a policy that maximizes the expected return $J(\pi)$,

$$J(\pi) = \mathbb{E}_{\tau \sim p_\pi(\tau)}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right] = \mathbb{E}_{s \sim d_\pi(s),\, a \sim \pi(a|s)}\left[r(s, a)\right],$$

where $p_\pi(\tau)$ represents the likelihood of a trajectory $\tau = \{(s_0, a_0, r_0), (s_1, a_1, r_1), \ldots\}$ under a policy $\pi$, and $\gamma \in [0, 1)$ is the discount factor. $d_\pi(s) = \sum_{t=0}^{\infty} \gamma^t p(s_t = s | \pi)$ represents the unnormalized discounted state distribution induced by the policy $\pi$ (Sutton & Barto, 1998), and $p(s_t = s | \pi)$ is the likelihood of the agent being in state $s$ after following $\pi$ for $t$ timesteps.

Our proposed AWR algorithm builds on ideas from reward-weighted regression (RWR) (Peters & Schaal, 2007), a policy search algorithm based on an expectation-maximization framework. At each iteration, the E-step constructs an estimate of the optimal policy according to $\pi^*(a|s) \propto \pi_k(a|s) \exp\left(R_{s,a} / \beta\right)$, where $\pi_k$ represents the policy at the $k$th iteration, $R_{s,a} = \sum_{t=0}^{\infty} \gamma^t r_t$ is the return, and $\beta > 0$ is a temperature parameter. The M-step then projects $\pi^*$ onto the space of parameterized policies by solving a supervised regression problem:

$$\pi_{k+1} = \arg\max_{\pi} \; \mathbb{E}_{s \sim d_{\pi_k}(s)} \, \mathbb{E}_{a \sim \pi_k(a|s)}\left[\log \pi(a|s) \exp\!\left(\frac{1}{\beta} R_{s,a}\right)\right]. \tag{2}$$

The RWR update can be interpreted as fitting a new policy $\pi_{k+1}$ to samples from the current policy $\pi_k$, where the likelihood of each action is weighted by the exponentiated return for that action.
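Concretely, the M-step in Eq. (2) reduces to a weighted maximum-likelihood problem: each sampled action's log-likelihood is weighted by its exponentiated return. A minimal NumPy sketch of computing these weights (function names like `discounted_returns` and `rwr_weights` are illustrative, not from the paper):

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Monte Carlo return R_{s,a} = sum_{t'>=t} gamma^(t'-t) r_{t'} from each step t."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def rwr_weights(returns, beta=1.0):
    """Exponentiated-return weights exp(R / beta) for the M-step regression."""
    return np.exp(returns / beta)

# The M-step then maximizes the weighted log-likelihood:
#   sum_i  rwr_weights(R_i) * log pi(a_i | s_i)
rewards = np.array([1.0, 0.0, 2.0])
R = discounted_returns(rewards, gamma=0.9)
w = rwr_weights(R, beta=5.0)
```

A smaller temperature $\beta$ concentrates the weights on the highest-return actions, while a larger $\beta$ makes the update closer to uniform behavioral cloning of $\pi_k$'s samples.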

3. ADVANTAGE-WEIGHTED REGRESSION

In this work, we present advantage-weighted regression (AWR), a simple off-policy RL algorithm based on reward-weighted regression. We first provide an overview of the AWR algorithm, and then describe its theoretical motivation and analyze its properties. The AWR algorithm is summarized in Algorithm 1. Each iteration $k$ of AWR consists of the following simple steps. First, the current policy $\pi_k(a|s)$ is used to sample a batch of trajectories $\{\tau_i\}$ that are then stored in the replay buffer $\mathcal{D}$, which is structured as a first-in first-out (FIFO) queue (Mnih et al., 2015). Then, a value function $V^{\mathcal{D}}_k(s)$ is fitted to all trajectories in the replay buffer $\mathcal{D}$, which can be done with simple Monte Carlo return estimates $R^{\mathcal{D}}_{s,a} = \sum_{t=0}^{T} \gamma^t r_t$. Finally, the same buffer is used to fit a new policy using advantage-weighted regression, where each state-action pair in the buffer is weighted according to the exponentiated advantage $\exp\!\left(\frac{1}{\beta} A^{\mathcal{D}}(s, a)\right)$, with the advantage given by $A^{\mathcal{D}}(s, a) = R^{\mathcal{D}}_{s,a} - V^{\mathcal{D}}(s)$, where $\beta$ is a temperature hyperparameter. In the following subsections, we first motivate AWR as a constrained policy search problem, and then extend our analysis to incorporate experience replay.
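The two regression steps above can be sketched as a single AWR iteration. The following is a schematic, not the authors' implementation: `value_fn` and `policy` stand in for any supervised regressors exposing `fit`/`predict` (the `sample_weight` convention follows scikit-learn-style estimators), and the buffer holds `(states, actions, rewards)` trajectory tuples:

```python
import numpy as np
from collections import deque

def discounted_returns(rewards, gamma=0.99):
    # Monte Carlo estimate R^D_{s,a} = sum_t gamma^t r_t along one trajectory.
    out, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        out[t] = running
    return out

def awr_iteration(buffer, value_fn, policy, beta=0.05, gamma=0.99):
    """One AWR update from a replay buffer of (states, actions, rewards) trajectories."""
    states, actions, returns = [], [], []
    for s, a, r in buffer:
        states.append(s)
        actions.append(a)
        returns.append(discounted_returns(r, gamma))
    S = np.concatenate(states)
    A = np.concatenate(actions)
    R = np.concatenate(returns)

    value_fn.fit(S, R)                        # step 1: value regression onto returns
    advantages = R - value_fn.predict(S)      # A^D(s,a) = R^D_{s,a} - V^D(s)
    weights = np.exp(advantages / beta)       # exponentiated-advantage weights
    policy.fit(S, A, sample_weight=weights)   # step 2: weighted regression onto actions
    return policy
```

New trajectories sampled from the updated policy would then be appended to the FIFO buffer (e.g. a bounded `deque`) before the next iteration; in practice the exponentiated weights are often clipped to a maximum value for numerical stability.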

3.1. DERIVATION

In this section, we derive the AWR algorithm as an approximate optimization of a constrained policy search problem. Our goal is to find a policy that maximizes the expected improvement $\eta(\pi) = J(\pi) - J(\mu)$ over a sampling policy $\mu(a|s)$. We first derive AWR for the setting where the sampling policy is a single Markovian policy. Then, in the next section, we extend our result to data from multiple policies, as in the case of experience replay. The expected improvement $\eta(\pi)$ can be expressed in terms of the advantage $A^{\mu}(s, a) = R^{\mu}_{s,a} - V^{\mu}(s)$ with respect to $\mu$ (Kakade & Langford, 2002; Schulman et al., 2015):

$$\eta(\pi) = \mathbb{E}_{s \sim d_\pi(s)} \, \mathbb{E}_{a \sim \pi(a|s)}\left[R^{\mu}_{s,a} - V^{\mu}(s)\right],$$

where $R^{\mu}_{s,a}$ denotes the return obtained by performing action $a$ in state $s$ and following $\mu$ thereafter, and $V^{\mu}(s) = \int_a \mu(a|s) R^{\mu}_{s,a} \, da$ corresponds to the value function of $\mu$. This objective differs from the ones used in the derivations of related algorithms, such as RWR and



Supplementary video: sites.google.com/view/awr-supp/

