ACCELERATING INVERSE REINFORCEMENT LEARNING WITH EXPERT BOOTSTRAPPING

Abstract

Existing inverse reinforcement learning methods (e.g. MaxEntIRL, f-IRL) search over candidate reward functions and solve a reinforcement learning problem in the inner loop. This creates a rather strange inversion where a harder problem, reinforcement learning, sits in the inner loop of a presumably easier problem, imitation learning. In this work, we show that better utilization of expert demonstrations can reduce the need for hard exploration in the inner RL loop, hence accelerating learning. Specifically, we propose two simple recipes: (1) placing expert transitions into the replay buffer of the inner RL algorithm (e.g. Soft Actor-Critic), which directly informs the learner about high-reward states instead of forcing the learner to discover them through extensive exploration, and (2) using expert actions in Q-value bootstrapping in order to improve the target Q-value estimates and more accurately describe high-value expert states. Our methods show significant gains over a MaxEntIRL baseline on the benchmark MuJoCo suite of tasks, speeding up recovery to 70% of deterministic expert performance by 2.13x on HalfCheetah-v2, 2.6x on

1. INTRODUCTION

The core problem in inverse reinforcement learning (IRL) is to recover a reward function that explains the expert's actions as optimal, together with a policy that is optimal with respect to that reward function, thus matching expert behavior. Existing methods like MaxEntIRL (Ziebart et al., 2008) and f-IRL (Ni et al., 2020) accomplish this by running an outer loop that updates a reward function and an inner loop that runs reinforcement learning (RL), usually many steps of policy iteration. However, running RL in the inner loop results in high sample and computational complexity compared to IL (Sun et al., 2017). In particular, it requires large numbers of learner rollouts, which can be expensive with high-fidelity, complex simulators, and in the real world, where excessive rollouts elevate the risk of damage to the physical agent or system. It is therefore important to study methods for accelerating the inner RL loop. Our key insight is that instead of treating the inner RL as a black-box policy optimization, we can provide it with valuable information about potentially high-reward regions, significantly accelerating learning. We propose two simple recipes that apply to a wide class of inner RL solvers, notably any actor-critic approach (e.g. Soft Actor-Critic (SAC) (Haarnoja et al., 2018)):

1. Place expert transitions into the actor's replay buffer. These transitions contain high-reward states, accelerating policy learning and reducing the exploration required to discover such states. We call this method expert replay bootstrapping (ERB).

2. Use the expert's next action from each transition in the critic's target Q-value estimator. By leveraging this side information, we more accurately describe high-value expert states and improve the estimate of the next state's target value. We call this method expert Q bootstrapping (EQB).
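The first recipe, ERB, can be sketched in a few lines. The sketch below uses a hypothetical minimal replay buffer, not the paper's actual implementation; since rewards are being learned, transitions are stored without rewards here, under the assumption that they are relabeled with the current reward estimate at sampling time.

```python
import random

class ReplayBuffer:
    """Minimal FIFO replay buffer (illustrative names, not the paper's code)."""

    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.storage = []

    def add(self, transition):
        # Evict the oldest transition once capacity is reached.
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

# Expert replay bootstrapping (ERB): seed the learner's buffer with expert
# transitions before (and alongside) learner rollouts, so sampled minibatches
# contain high-reward expert states from the very start of training.
buffer = ReplayBuffer()
expert_transitions = [("s%d" % i, "a%d" % i, "s%d" % (i + 1)) for i in range(5)]
for t in expert_transitions:
    buffer.add(t)                    # expert data goes into the same buffer ...
buffer.add(("s_l", "a_l", "s_l2"))   # ... as ordinary learner transitions

batch = buffer.sample(4)             # minibatches now mix expert and learner data
```

Because expert and learner transitions share one buffer, no change to the inner RL update is needed; the only modification is what gets added to the buffer.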
In general, the critic's target value estimate is derived from the actor, and the actor's optimization objective is derived from the critic. This creates a mutual dependence: neither can improve without the other improving in turn. When the policy lags behind because it cannot effectively maximize a potentially complex Q-function surface, its actions may be quite suboptimal, which lowers the critic's target value estimates at expert states and slows learning. In the imitation learning setting, however, we have access to expert demonstrations that allow the critic to progress without being held back by the actor: since the reward function is learned precisely to explain the expert demonstrations as optimal, expert actions are (near) value-maximizing. By leveraging expert Q bootstrapping and providing accurate targets using the expert's next action, we allow the critic to progress as if it were paired with a stronger policy. Accurate targets let the Q function advance in its learning and provide better signals to the policy, further accelerating learning. We show that our methods accelerate multiple state-of-the-art inverse reinforcement learning algorithms such as MaxEntIRL (Ziebart et al., 2008) and f-IRL (Ni et al., 2020). We believe our methods are especially helpful on hard exploration problems, i.e. problems where many actions lead to low reward while few, sparse actions lead to high reward, as in the toy tree MDP in Section 6. In such problems it is difficult to rely on the learner to find high-reward regions through hard exploration, and informing the learner of expert states and actions through expert bootstrapping can significantly accelerate recovery of expert performance.
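The EQB idea can be illustrated with a toy discrete example. The sketch below is an assumption-laden simplification: the Q table, state/action names, and the rule of taking the better of the policy's and the expert's action in the bootstrap term are all illustrative choices, not the paper's exact formulation; the key point it demonstrates is that evaluating the target at the expert's next action avoids underestimating expert states when the policy lags behind.

```python
# Toy Q-function over discrete (state, action) pairs (illustrative values).
Q = {("s1", "a_policy"): 0.2, ("s1", "a_expert"): 0.9}

def policy_action(state):
    # A lagging policy that has not yet discovered the expert's action.
    return "a_policy"

def td_target(reward, next_state, expert_next_action=None, gamma=0.99):
    """Expert Q bootstrapping (EQB) sketch: on expert transitions, evaluate
    the bootstrap term at the expert's next action rather than only at the
    current policy's action, yielding a higher, more accurate target."""
    next_q = Q[(next_state, policy_action(next_state))]
    if expert_next_action is not None:
        # One simple instantiation: take the better of the two actions.
        next_q = max(next_q, Q[(next_state, expert_next_action)])
    return reward + gamma * next_q

plain_target = td_target(1.0, "s1")  # bootstraps only with the policy's action
eqb_target = td_target(1.0, "s1", expert_next_action="a_expert")
```

Here the plain target is depressed by the suboptimal policy action, while the EQB target reflects the high value of the expert state, so the critic's learning is not bottlenecked by the actor.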
In summary, the main contributions of this paper are two recipes, ERB and EQB, which can be added on top of state-of-the-art inverse reinforcement learning algorithms (in a few lines of code) for accelerated learning. Empirically, we show that our techniques yield significant gains on the benchmark MuJoCo suite of tasks. In addition, we explain when and why our techniques are helpful through the study of a simple toy tree MDP.

2.1. LEVERAGING EXPERT DEMONSTRATIONS IN REINFORCEMENT LEARNING

Reinforcement learning algorithms (e.g. SAC (Haarnoja et al., 2018), DQN (Mnih et al., 2013), and PPO (Schulman et al., 2017)) aim to find an optimal policy by interacting with an MDP. Solving a reinforcement learning problem can require extensive exploration to find potentially sparse reward. A standard practice is to bootstrap RL policies with a behavior cloning policy (Cheng et al., 2018; Sun et al., 2018). Deep Q-Learning from Demonstrations (DQfD) (Hester et al., 2018) and Human Experience Replay (Hosu & Rebedea, 2016) accelerate exploration by inserting expert transitions into the policy replay buffer in order to inform the learner of high-reward states. However, these RL approaches assume access to stationary ground truth rewards, which is not the case in imitation learning, where rewards are learned over time.

2.2. IMITATION LEARNING

Imitation learning algorithms attempt to find a policy that imitates a given set of expert demonstrations without access to ground truth rewards. Based on the information assumed to be available, imitation learning algorithms can be broadly classified into three categories: offline (e.g. Behavior Cloning), interactive expert (e.g. DAgger (Ross et al., 2011), AggreVaTe (Ross & Bagnell, 2014)), or interactive simulator (e.g. MaxEntIRL (Ziebart et al., 2008)). While offline algorithms such as offline IQ-Learn (Garg et al., 2021) and AVRIL (Chan & van der Schaar, 2021) are sample efficient at leveraging expert data, they suffer from covariate shift due to the mismatch between expert and learner distributions. In this work, we use online inverse reinforcement learning algorithms to combat covariate shift and hence assume access to an interactive simulator.

In contrast to offline approaches, we focus on general online inverse reinforcement learning methods. Adversarial imitation learning methods (e.g. AIRL (Fu et al., 2018), GAIL (Ho & Ermon, 2016)) attempt to find an optimal policy by using a discriminator instead of a reward function, running policy optimization in the outer loop rather than in the inner loop as in MaxEntIRL. Similar to our work, SQIL (Soft-Q Imitation Learning) (Reddy et al., 2019) inserts expert transitions into the learner replay buffer. However, SQIL simply assigns a reward of 0 to all learner transitions and a reward of 1 to all expert transitions.
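SQIL's constant-reward labeling, mentioned above, admits a one-line sketch. The function name and transition format below are hypothetical; the point is only that SQIL replaces a learned reward function with fixed labels, in contrast to IRL methods that update the reward over time.

```python
# SQIL-style labeling (sketch): constant rewards stand in for a learned
# reward function -- 1 for expert transitions, 0 for learner transitions.
def sqil_label(transitions, is_expert):
    return [(s, a, s2, 1.0 if is_expert else 0.0) for (s, a, s2) in transitions]

expert_batch = sqil_label([("s_e", "a_e", "s_e2")], is_expert=True)
learner_batch = sqil_label([("s_l", "a_l", "s_l2")], is_expert=False)
```

Because these labels never change, SQIL avoids reward learning entirely, whereas the IRL methods considered in this work relabel transitions as the reward estimate improves.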

