RANK THE EPISODES: A SIMPLE APPROACH FOR EXPLORATION IN PROCEDURALLY-GENERATED ENVIRONMENTS

Abstract

Exploration under sparse reward is a long-standing challenge for model-free reinforcement learning. State-of-the-art methods address this challenge by introducing intrinsic rewards that encourage exploration of novel states or states with uncertain environment dynamics. Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode, so the agent is unlikely to visit the same state more than once. Motivated by how humans distinguish good exploration behaviors by looking at the entire episode, we introduce RAPID, a simple yet effective episode-level exploration method for procedurally-generated environments. RAPID regards each episode as a whole and assigns it an episodic exploration score from both per-episode and long-term views. Highly scored episodes are treated as good exploration behaviors and are stored in a small ranking buffer. The agent then imitates the episodes in the buffer to reproduce past good exploration behaviors. We demonstrate our method on several procedurally-generated MiniGrid environments, a first-person-view 3D maze navigation task from MiniWorld, and several sparse MuJoCo tasks. The results show that RAPID significantly outperforms state-of-the-art intrinsic-reward strategies in terms of sample efficiency and final performance. The code is available at https://github.com/daochenzha/rapid.
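
The Python sketch below illustrates the episode-ranking mechanism summarized above: score each finished episode, keep only the highest-scoring ones in a small buffer, and imitate them. The buffer class, the scoring function, and all hyperparameters (capacity, weights) are illustrative assumptions for exposition, not the paper's exact implementation; the actual code is in the linked repository.

```python
# Hedged sketch of an episode-ranking buffer; names and constants are
# illustrative placeholders, not the released RAPID implementation.
import heapq


class EpisodeRankingBuffer:
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.buffer = []       # min-heap of (score, counter, episode)
        self._counter = 0      # tie-breaker so episodes are never compared

    def add(self, episode, score):
        # Keep only the highest-scoring episodes seen so far.
        item = (score, self._counter, episode)
        self._counter += 1
        if len(self.buffer) < self.capacity:
            heapq.heappush(self.buffer, item)
        else:
            heapq.heappushpop(self.buffer, item)

    def imitation_data(self):
        # Flatten stored episodes into (state, action) pairs for
        # a behavior-cloning (imitation) update.
        return [(s, a) for _, _, ep in self.buffer for (s, a) in ep]


def episodic_exploration_score(episode, visit_counts, w_global=0.1):
    """Toy scoring function combining a per-episode view and a long-term view.

    Assumes discrete, hashable states and a dict-like `visit_counts`
    tracking how often each state has been visited across training.
    """
    states = [s for (s, _) in episode]
    # Per-episode view: fraction of distinct states within the episode.
    local_score = len(set(states)) / max(len(states), 1)
    # Long-term view: bonus for states rarely seen across training.
    global_score = sum(1.0 / (1 + visit_counts.get(s, 0)) for s in set(states))
    return local_score + w_global * global_score
```

In a training loop, each finished episode would be scored and pushed into the buffer, and the policy would periodically take an imitation step on `imitation_data()` in addition to the usual RL update; the exact weighting of the score terms and the imitation schedule are design choices not specified by this sketch.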

1. INTRODUCTION

Deep reinforcement learning (RL) is widely applied in various domains (Mnih et al., 2015; Silver et al., 2016; Mnih et al., 2016; Lillicrap et al., 2015; Andrychowicz et al., 2017; Zha et al., 2019a; Liu et al., 2020). However, RL algorithms often require a tremendous number of samples to achieve reasonable performance (Hessel et al., 2018). This sample-efficiency issue becomes more pronounced in sparse-reward environments, where the agent may take an extremely long time before encountering a reward signal (Riedmiller et al., 2018). Thus, how to efficiently explore the environment under sparse reward remains an open challenge (Osband et al., 2019).

To address this challenge, many exploration methods have been investigated and demonstrated to be effective on hard-exploration environments. One of the most popular techniques is to use intrinsic rewards to encourage exploration (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Guo et al., 2016; Zheng et al., 2018; Du et al., 2019). The key idea is to give an intrinsic bonus based on uncertainty, e.g., assigning higher rewards to novel states (Ostrovski et al., 2017) or using the prediction error of a dynamics model as the intrinsic reward (Pathak et al., 2017). While many intrinsic-reward methods have demonstrated superior performance on hard-exploration environments, such as Montezuma's Revenge (Burda et al., 2018b) and Pitfall! (Badia et al., 2019), most previous studies use the same singleton environment for training and testing, i.e., the agent aims to solve the same environment in each episode. However, recent studies show that an agent trained in this way is susceptible to overfitting and may fail to generalize to even a slightly different environment (Rajeswaran et al., 2017; Zhang et al., 2018a). To deal with this issue, a few procedurally-generated environments

