ACCELERATING INVERSE REINFORCEMENT LEARNING WITH EXPERT BOOTSTRAPPING

Abstract

Existing inverse reinforcement learning methods (e.g. MaxEntIRL, f-IRL) search over candidate reward functions and solve a reinforcement learning problem in the inner loop. This creates a rather strange inversion where a harder problem, reinforcement learning, sits in the inner loop of a presumably easier problem, imitation learning. In this work, we show that better utilization of expert demonstrations can reduce the need for hard exploration in the inner RL loop, thereby accelerating learning. Specifically, we propose two simple recipes: (1) placing expert transitions into the replay buffer of the inner RL algorithm (e.g. Soft Actor-Critic), which directly informs the learner about high-reward states instead of forcing the learner to discover them through extensive exploration, and (2) using expert actions in Q-value bootstrapping in order to improve the target Q-value estimates and more accurately describe high-value expert states. Our methods show significant gains over a MaxEntIRL baseline on the benchmark MuJoCo suite of tasks, speeding up recovery to 70% of deterministic expert performance by 2.13x on HalfCheetah-v2, 2.6x on

1. INTRODUCTION

The core problem in inverse reinforcement learning (IRL) is to recover a reward function that explains the expert's actions as being optimal, together with a policy that is optimal with respect to this reward function, thus matching expert behavior. Existing methods like MaxEntIRL (Ziebart et al., 2008) and f-IRL (Ni et al., 2020) accomplish this by running an outer loop that updates a reward function and an inner loop that runs reinforcement learning (RL), usually many steps of policy iteration. However, running RL in the inner loop results in high sample and computational complexity compared to imitation learning (Sun et al., 2017). In particular, it requires large numbers of learner rollouts. Learner rollouts can be expensive, especially with high-fidelity, complex simulators, and in the real world, where excessive rollouts carry an elevated risk of damage to the physical agent or system. It is therefore important to study methods for accelerating the inner RL loop.

Our key insight is that instead of treating the inner RL as a black-box policy optimization, we can provide it with valuable information about potentially high-reward regions, which can significantly accelerate learning. We propose two simple recipes that are applicable to a wide class of inner RL solvers, notably any actor-critic approach (e.g. Soft Actor-Critic (SAC) (Haarnoja et al., 2018)); a code sketch of both recipes is given at the end of this section:

1. Place expert transitions into the actor's replay buffer. These transitions contain high-reward states that accelerate policy learning and reduce the amount of exploration required to discover such states. We call this method expert replay bootstrapping (ERB).

2. Use the expert's next action from each transition in the critic's target Q-value estimator. By leveraging such side information, we more accurately describe high-value expert states and improve the estimate of the next state's target value. We call this method expert Q bootstrapping (EQB).

In general, the critic's target value estimate is derived from the actor, and the actor's optimization objective is derived from the critic. This creates a mutual dependence where neither side can move forward without the other progressing accordingly. When the policy is lagging behind due to its inability to effectively maximize a potentially complex Q-function surface, the policy's action may be quite
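To make the two recipes concrete, the following is a minimal sketch of how they could be layered on top of a generic PyTorch SAC update. The names here (ReplayBuffer fields, actor.sample, critic_target, reward_fn) are illustrative placeholders rather than our released implementation, and the max-over-bootstraps form of the EQB target shown below is one plausible instantiation, not a definitive specification.

```python
import torch

# Illustrative sketch of ERB and EQB on a generic SAC learner (all names are placeholders).

def seed_replay_buffer(buffer, expert_transitions):
    """ERB: insert expert transitions (s, a, s', a'_E) directly into the SAC replay buffer."""
    for s, a, s_next, a_next_expert in expert_transitions:
        buffer.add(state=s, action=a, next_state=s_next,
                   expert_next_action=a_next_expert, is_expert=True)

def td_target(batch, actor, critic_target, reward_fn, gamma=0.99, alpha=0.2):
    """EQB: bootstrap the target value with the expert's logged next action when available."""
    # Reward comes from the current outer-loop reward estimate, not the environment.
    r = reward_fn(batch.state, batch.action)
    # Standard SAC bootstrap through the current policy.
    a_pi, logp = actor.sample(batch.next_state)
    q_pi = critic_target(batch.next_state, a_pi) - alpha * logp
    # Expert bootstrap: evaluate the expert's logged next action at expert states.
    # (Non-expert rows carry a zero placeholder action; the mask below ignores them.)
    q_exp = critic_target(batch.next_state, batch.expert_next_action)
    # One plausible EQB rule: keep the larger of the two bootstrapped values.
    use_expert = batch.is_expert & (q_exp > q_pi)
    v_next = torch.where(use_expert, q_exp, q_pi)
    return r + gamma * (1.0 - batch.done) * v_next
```

Because the expert's next action gives the critic a concrete high-value action to bootstrap through, the target at expert states no longer relies on the current policy having already learned to maximize the Q-function surface there.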

