BLENDING MPC & VALUE FUNCTION APPROXIMATION FOR EFFICIENT REINFORCEMENT LEARNING

Abstract

Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective strategy, but real-time performance requirements warrant the use of simple models. If the model is not sufficiently accurate, then the resulting controller can be biased, limiting performance. We present a framework for improving on MPC with model-free reinforcement learning (RL). The key insight is to view MPC as constructing a series of local Q-function approximations. We show that by using a parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade off learned value estimates against the local Q-function approximations. We present a theoretical analysis that shows how error from inaccurate models in MPC and value function estimation in RL can be balanced. We further propose an algorithm that changes λ over time to reduce the dependence on MPC as our estimates of the value function improve, and test the efficacy of our approach on challenging high-dimensional manipulation tasks with biased models in simulation. We demonstrate that our approach can obtain performance comparable to MPC with access to the true dynamics, even under severe model bias, while being more sample-efficient than model-free RL.

1. INTRODUCTION

Model-free Reinforcement Learning (RL) is increasingly used in challenging sequential decision-making problems, including high-dimensional robotics control tasks (Haarnoja et al., 2018; Schulman et al., 2017) as well as video and board games (Silver et al., 2016; 2017). While these approaches are extremely general, and can theoretically solve complex problems with little prior knowledge, they also typically require a large quantity of training data to succeed. In robotics and engineering domains, data may be collected from real-world interaction, a process that can be dangerous, time-consuming, and expensive. Model-Predictive Control (MPC) offers a simpler, more practical alternative. While RL typically uses data to learn a global model offline, which is then deployed at test time, MPC solves for a policy online by optimizing an approximate model for a finite horizon at a given state. This policy is then executed for a single timestep and the process repeats. MPC is one of the most popular approaches for control of complex, safety-critical systems such as autonomous helicopters (Abbeel et al., 2010), aggressive off-road vehicles (Williams et al., 2016), and humanoid robots (Erez et al., 2013), owing to its ability to use approximate models to optimize complex cost functions with nonlinear constraints (Mayne et al., 2000; 2011). However, approximations in the model used by MPC can significantly limit performance. Specifically, model bias may result in persistent errors that eventually compound and become catastrophic. For example, in non-prehensile manipulation, practitioners often use a simple quasi-static model that assumes an object does not roll or slide away when pushed. For more dynamic objects, this can lead to aggressive pushing policies that perpetually over-correct, eventually driving the object off the surface.
Recently, there have been several attempts to combine MPC with model-free RL, showing that the combination can improve over either approach alone. Many of these approaches use RL to learn a terminal cost function, thereby increasing the effective horizon of MPC (Zhong et al., 2013; Lowrey et al., 2018; Bhardwaj et al., 2020). However, the learned value function is only applied at the end of the MPC horizon, so model errors still persist within the horizon, leading to sub-optimal policies. Similar approaches have also been applied to great effect in discrete games with known models (Silver et al., 2016; 2017; Anthony et al., 2017), where value functions and policies learned via model-free RL are used to guide Monte-Carlo Tree Search. In this paper, we focus on a somewhat broader question: can machine learning be used to both increase the effective horizon of MPC and correct for model bias? One straightforward approach is to learn (or correct) the MPC model from real data encountered during execution; however, there are practical barriers to this strategy. Hand-constructed models are often crude approximations of reality and lack the expressivity to represent the dynamics actually encountered. Moreover, increasing the complexity of such models leads to computationally expensive updates that can harm MPC's online performance. Model-based RL approaches such as Chua et al. (2018); Nagabandi et al. (2018); Shyam et al. (2019) aim to learn general neural network models directly from data. However, learning globally consistent models is an exceptionally hard task due to issues such as covariate shift (Ross & Bagnell, 2012). We propose a framework, MPQ(λ), for weaving together MPC with learned value estimates to trade off errors in the MPC model against approximation error in a learned value function. Our key insight is to view MPC as tracing out a series of local Q-function approximations. We can then blend each of these Q-functions with value estimates from reinforcement learning.
We show that by using a blending parameter λ, similar to the trace decay parameter in TD(λ), we can systematically trade off errors between these two sources. Moreover, by smoothly decaying λ over learning episodes we can achieve the best of both worlds: a policy can depend on a prior model before it has encountered any data and then gradually become more reliant on learned value estimates as it gains experience. To summarize, our key contributions are:
1. A framework that unifies MPC and model-free RL through value function approximation.
2. Theoretical analysis of finite-horizon planning with approximate models and value functions.
3. Empirical evaluation on challenging manipulation problems with varying degrees of model bias.
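To make the role of λ concrete, the following minimal sketch shows one way an exponential λ-weighting, analogous to TD(λ), can blend a set of h-step Q estimates: shorter-horizon estimates lean on the learned value function, while longer-horizon ones lean on the model. This is an illustrative assumption about the blending mechanism only; the precise MPQ(λ) blending rule is defined later in the paper.

```python
# Illustrative sketch only: TD(lambda)-style exponential weighting over
# h-step Q estimates. The exact MPQ(lambda) rule is given later in the paper.

def blended_q(h_step_estimates, lam):
    """Blend h-step estimates (h = 1..H) with weights (1-lam)*lam**(h-1),
    placing the remaining mass on the full-horizon H-step estimate.

    lam = 0 trusts the 1-step (value-function-dominated) estimate;
    lam -> 1 trusts the H-step (model-dominated) estimate.
    """
    H = len(h_step_estimates)
    weights = [(1.0 - lam) * lam ** (h - 1) for h in range(1, H)]
    weights.append(lam ** (H - 1))  # remaining mass on the H-step term
    # Weights form a geometric series and sum to exactly 1 for lam in [0, 1].
    return sum(w * q for w, q in zip(weights, h_step_estimates))
```

Decaying `lam` toward 0 over learning episodes then shifts trust from the model-based long-horizon estimates toward the learned value function, matching the schedule described above.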

2.1. REINFORCEMENT LEARNING

We consider an agent acting in an infinite-horizon discounted Markov Decision Process (MDP). An MDP is defined by a tuple M = (S, A, c, P, γ, µ), where S is the state space, A is the action space, c(s, a) is the per-step cost function, s_{t+1} ∼ P(·|s_t, a_t) is the stochastic transition dynamics, γ is the discount factor, and µ(s_0) is a distribution over initial states. A closed-loop policy π(·|s) outputs a distribution over actions given a state. Let µ^π_M denote the distribution over state-action trajectories obtained by running policy π on M. The value function for a given policy π is defined as

$$V^{\pi}_{M}(s) = \mathbb{E}_{\mu^{\pi}_{M}}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t) \,\middle|\, s_0 = s\right]$$

and the action-value function as

$$Q^{\pi}_{M}(s, a) = \mathbb{E}_{\mu^{\pi}_{M}}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c(s_t, a_t) \,\middle|\, s_0 = s, a_0 = a\right].$$

The objective is to find an optimal policy $\pi^{*} = \operatorname{argmin}_{\pi} \mathbb{E}_{s_0 \sim \mu}\left[V^{\pi}_{M}(s_0)\right]$. We can also define the (dis)advantage function $A^{\pi}_{M}(s, a) = Q^{\pi}_{M}(s, a) - V^{\pi}_{M}(s)$, which measures how good an action is compared to the action taken by the policy in expectation. It can be equivalently expressed in terms of the Bellman error as

$$A^{\pi}_{M}(s, a) = c(s, a) + \gamma\, \mathbb{E}_{s' \sim P,\, a' \sim \pi}\!\left[Q^{\pi}_{M}(s', a')\right] - \mathbb{E}_{a \sim \pi}\!\left[Q^{\pi}_{M}(s, a)\right].$$
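The definitions above can be grounded with a short Monte Carlo sketch on a toy two-state MDP. Everything in this snippet (the states, costs, dynamics, and the uniform rollout policy) is an illustrative assumption, not part of the paper; it only shows how V^π, Q^π, and the advantage relate.

```python
import random

# Toy 2-state, 2-action MDP (states/actions in {0, 1}); purely illustrative.
GAMMA = 0.9

def cost(s, a):
    # Per-step cost c(s, a): unit cost whenever the action differs from the state.
    return 1.0 if a != s else 0.0

def step(s, a):
    # Stochastic dynamics P(s' | s, a): move to the state named by the
    # action with probability 0.8, otherwise to the other state.
    return a if random.random() < 0.8 else 1 - a

def policy(s):
    # A fixed closed-loop policy pi(a | s): uniform over both actions.
    return random.randint(0, 1)

def q_value(s, a, n_rollouts=500, horizon=50):
    """Monte Carlo estimate of Q^pi(s, a) = E[sum_t gamma^t c(s_t, a_t)],
    truncated at a finite horizon (the gamma^t tail is negligible)."""
    total = 0.0
    for _ in range(n_rollouts):
        st, at, ret, disc = s, a, 0.0, 1.0
        for _ in range(horizon):
            ret += disc * cost(st, at)
            disc *= GAMMA
            st = step(st, at)
            at = policy(st)
        total += ret
    return total / n_rollouts

def v_value(s, **kw):
    """V^pi(s) = E_{a ~ pi}[Q^pi(s, a)]; pi is uniform over the two actions."""
    return 0.5 * (q_value(s, 0, **kw) + q_value(s, 1, **kw))

def advantage(s, a, **kw):
    """(Dis)advantage A^pi(s, a) = Q^pi(s, a) - V^pi(s)."""
    return q_value(s, a, **kw) - v_value(s, **kw)
```

In state 0, action 1 pays an extra unit of immediate cost relative to action 0, so its estimated Q-value (and advantage) is higher, i.e. worse under the cost convention used here.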

2.2. MODEL-PREDICTIVE CONTROL

MPC is a widely used technique for synthesizing closed-loop policies for MDPs. Instead of trying to solve for a single, globally optimal policy, MPC follows the more pragmatic approach of optimizing simple, local policies online. At every timestep on the system, MPC uses an approximate model of the environment to search for a parameterized policy that minimizes cost over a finite horizon. An action is sampled from the policy and executed on the system. The process is then repeated from the next state, often by warm-starting the optimization from the previous solution. We formalize this process as solving a simpler surrogate MDP $\hat{M} = (S, A, \hat{c}, \hat{P}, \gamma, \hat{\mu}, H)$ online, which differs from M by using an approximate cost function $\hat{c}$ and approximate transition dynamics $\hat{P}$, and by limiting the horizon to H. Since it plans to a finite horizon, it is also common to use a terminal state-action value function $\hat{Q}$ that estimates the cost-to-go. The start state distribution $\hat{\mu}$ is a Dirac delta function centered on the current state $s_0 = s_t$. MPC can be viewed as iteratively constructing an estimate of the Q-function of the original MDP M, given policy $\pi_{\phi}$ at state s:

$$Q^{\phi}_{H}(s, a) = \mathbb{E}_{\mu^{\pi_{\phi}}_{\hat{M}}}\!\left[\sum_{i=0}^{H-1} \gamma^{i} \hat{c}(s_i, a_i) + \gamma^{H} \hat{Q}(s_H, a_H) \,\middle|\, s_0 = s, a_0 = a\right] \quad (1)$$
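The finite-horizon Q estimate in Eq. (1) can be sketched as a short Monte Carlo rollout procedure. The approximate cost, dynamics, terminal value, and rollout policy below are all toy stand-ins chosen for illustration, not models from the paper; the structure of `mpc_q` is what mirrors Eq. (1).

```python
import random

# Sketch of Eq. (1): an H-step rollout under an approximate model
# (c_hat, p_hat), capped with a terminal value estimate q_hat.
# All concrete model definitions here are hypothetical toy stand-ins.
GAMMA, H = 0.95, 5

def c_hat(s, a):
    # Approximate per-step cost: quadratic penalty (illustrative).
    return (s - a) ** 2

def p_hat(s, a):
    # Approximate (deterministic) dynamics (illustrative).
    return 0.9 * s + 0.1 * a

def q_hat(s, a):
    # Terminal state-action value estimating the cost-to-go (illustrative).
    return (s ** 2) / (1.0 - GAMMA)

def pi_phi(s):
    # Sampled rollout policy pi_phi: noisy feedback on the state (illustrative).
    return s + random.gauss(0.0, 0.1)

def mpc_q(s, a, n_rollouts=32):
    """Monte Carlo estimate of Q_H^phi(s, a) as in Eq. (1)."""
    total = 0.0
    for _ in range(n_rollouts):
        si, ai, ret = s, a, 0.0
        for i in range(H):
            ret += (GAMMA ** i) * c_hat(si, ai)
            si = p_hat(si, ai)
            ai = pi_phi(si)
        ret += (GAMMA ** H) * q_hat(si, ai)  # terminal cost-to-go term
        total += ret
    return total / n_rollouts

def mpc_action(s, candidates):
    """One MPC step: pick the candidate action minimizing the local Q estimate,
    execute it, and re-plan from the next state (re-planning loop omitted)."""
    return min(candidates, key=lambda a: mpc_q(s, a))
```

Repeating `mpc_action` at every timestep, each time from the newly observed state, recovers the receding-horizon loop described above.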




