

Abstract

Simulated environments with procedurally generated content have become popular benchmarks for testing the systematic generalization of reinforcement learning agents. Every level in such an environment is algorithmically created, thereby exhibiting a unique configuration of underlying factors of variation, such as layout, positions of entities, asset appearances, or even the rules governing environment transitions. Fixed sets of training levels can be determined to aid comparison and reproducibility, and test levels can be held out to evaluate the generalization and robustness of agents. While prior work samples training levels directly (e.g. uniformly) for the agent to learn from, we investigate the hypothesis that different levels provide different learning progress for an agent at specific times during training. We introduce Prioritized Level Replay, a general framework for estimating the future learning potential of a level given the current state of the agent's policy. We find that temporal-difference (TD) errors, while previously used to selectively sample past transitions, also prove effective for scoring a level's future learning potential when the agent replays (that is, revisits) that level to generate entirely new episodes of experience from it. We report significantly improved sample efficiency and generalization on the majority of Procgen Benchmark environments, as well as on two challenging MiniGrid environments. Lastly, we present a qualitative analysis showing that Prioritized Level Replay induces an implicit curriculum, taking the agent gradually from easier to harder levels.

1. INTRODUCTION

Environments generated using procedural content generation (PCG) have garnered increasing interest in RL research, leading to a surge of PCG environments such as MiniGrid (Chevalier-Boisvert et al., 2018), the Obstacle Tower Challenge (Juliani et al., 2019), the Procgen Benchmark (Cobbe et al., 2019), and the NetHack Learning Environment (Küttler et al., 2020). Unlike singleton environments, such as those in the Arcade Learning Environment benchmark (Bellemare et al., 2013), which are exploitable by memorization and deterministic reset strategies (Ecoffet et al., 2019; 2020), PCG environments create novel environment instances or levels algorithmically. Every such level exhibits a unique configuration of underlying factors of variation, such as layout, positions of game entities, asset appearances, or even different rules governing environment transitions, making them a promising target for evaluating systematic generalization in RL (Risi & Togelius, 2020). Each level can be associated with a level identifier (e.g. an index, a random number generator seed, etc.) used by the PCG algorithm to generate a specific level. This allows for a clean notion of train-test split and testing on held-out levels, in line with best practices from supervised learning. Due to the variation among algorithmically generated levels, a random selection of PCG levels can, in principle, correspond to levels of varying difficulty, as well as reveal different, perhaps rare, environment dynamics. This diversity among levels implies that different levels hold differing learning potential for an agent at any point in training, a fact exploited by methods that learn both to generate levels and to solve them (Wang et al., 2019; 2020). Here, we focus on the less intrusive setting in which we do not have control over level generation, but can instead replay (that is, revisit) any previously visited level during training to generate entirely new experiences from it.
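The determinism that makes this possible can be illustrated with a toy PCG generator: a level identifier (here, an integer seed) deterministically fixes all factors of variation, so the same identifier always regenerates the same level. This is a purely illustrative sketch, not the generator of any particular benchmark; real PCG environments such as Procgen work analogously.

```python
import random

def generate_level(seed, width=9, height=9):
    """Toy PCG generator: the seed deterministically determines the
    level's factors of variation (here, wall layout and goal position).
    Illustrative only; function name and level representation are made up."""
    rng = random.Random(seed)
    walls = frozenset(
        (rng.randrange(width), rng.randrange(height)) for _ in range(12)
    )
    goal = (rng.randrange(width), rng.randrange(height))
    return {"walls": walls, "goal": goal}

# The same identifier always yields the same level instance, which is
# what makes replaying a level and clean train-test splits possible.
assert generate_level(42) == generate_level(42)
```

Because the mapping from identifier to level is a pure function of the seed, storing the seed alone suffices to revisit a level later and collect entirely new trajectories from it.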
We introduce Prioritized Level Replay, illustrated in Figure 1, a method for sampling training levels that exploits these differences among levels. Our method induces a level-sampling distribution that prioritizes levels based on the learning potential of replaying each level anew. Throughout agent training, our method updates a heuristic score appraising the agent's learning potential on a level, based on temporal-difference (TD) errors collected along the last trajectory sampled from that level. Rather than sampling from the default, typically static (e.g. uniform) training level distribution, our method samples from a distribution derived from a normalization procedure over these level scores. Prioritized Level Replay makes no assumptions about how the policy is updated, and is therefore compatible with any RL method. Furthermore, our method does not rely on any external or general method for quantifying the difficulty of a level, but instead derives a level score directly from the policy. The only requirements, satisfied in a wide range of settings where a simulator or game is used to collect experience, are as follows: (i) some notion of "level" exists, (ii) such levels can be sampled from the environment in an identifiable way, and (iii) given a level identifier, it is possible to set the environment to that level in order to collect new experiences from it. While previous works in off-policy RL devised effective methods to directly reuse past experiences for training (Schaul et al., 2015; Andrychowicz et al., 2017), Prioritized Level Replay uses past experiences to inform the collection of future experiences by assessing how much replaying each level anew will benefit learning. Our method can thus be seen as a forward-view variation of prioritized experience replay, and an online counterpart to this off-policy method for policy-gradient algorithms.
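The two ingredients described above, scoring a level from the TD errors of its last trajectory and normalizing scores into a sampling distribution, can be sketched as follows. This is a hypothetical sketch: the mean absolute one-step TD error and the rank-based normalization with a temperature are illustrative choices (the paper's exact scoring and normalization details are specified later), and the function names are invented here.

```python
import numpy as np

def td_error_score(rewards, values, gamma=0.99):
    """Score a level by the mean absolute one-step TD error along the
    last trajectory collected from it. `values` has one more entry than
    `rewards`, with the final entry 0 for a terminal state."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    td_errors = rewards + gamma * values[1:] - values[:-1]
    return np.abs(td_errors).mean()

def replay_distribution(scores, temperature=0.1):
    """Turn per-level scores into a sampling distribution via rank-based
    prioritization: higher-scoring levels get rank 1, 2, ..., and
    probabilities proportional to (1/rank)^(1/temperature)."""
    scores = np.asarray(scores, dtype=float)
    ranks = np.empty_like(scores)
    ranks[np.argsort(-scores)] = np.arange(1, len(scores) + 1)
    weights = (1.0 / ranks) ** (1.0 / temperature)
    return weights / weights.sum()
```

Rank-based normalization has the convenient property of being invariant to the scale of the scores, so the resulting distribution depends only on the relative ordering of levels.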
This paper makes the following core contributions: (i) we introduce a computationally efficient algorithm for adaptively prioritizing levels throughout training, using a heuristic-based assessment of the learning potential of replaying each level; (ii) we empirically demonstrate that our method leads to significant gains on 11 of 16 Procgen Benchmark environments and on two challenging MiniGrid environments; (iii) we demonstrate that our method combines with a previous leading method to set a new state-of-the-art on the Procgen Benchmark; and (iv) we provide evidence that our method induces an implicit curriculum over training levels in sparse-reward settings.

2. BACKGROUND

In this paper, we refer to a PCG environment as any computational process that, given a level identifier (e.g. an index or a seed), generates a level, defined as an environment instance exhibiting a unique configuration of its underlying factors of variation, such as layout, positions of game entities, asset appearances, or even rules that govern the environment transitions (Risi & Togelius, 2020). For example, MiniGrid's MultiRoom environment instantiates mazes with varying numbers of rooms based on the seed (Chevalier-Boisvert et al., 2018). We refer to sampling a new trajectory generated from the agent's latest policy acting on a given level l as replaying that level l. The level diversity of PCG environments makes them useful testbeds for studying the robustness and generalization ability of RL agents, measured by agent performance on unseen test levels. The standard test evaluation protocol for PCG environments consists of training the agent on a finite set of training levels, Λ_train, and evaluating its performance on unseen test levels, Λ_test, drawn from the set of all levels. A common variation of this protocol sets Λ_train to the set of all levels, though the
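The standard protocol of fixed training levels and held-out test levels can be sketched as below, using integer seeds as level identifiers in the style of Procgen. The function name, seed range, and split sizes are illustrative, not prescribed by any benchmark.

```python
import random

def make_level_split(num_train=200, num_test=100, rng_seed=0):
    """Sample disjoint sets of level seeds for training (Λ_train) and
    testing (Λ_test) from a large pool of possible levels."""
    rng = random.Random(rng_seed)
    seeds = rng.sample(range(10**6), num_train + num_test)
    return seeds[:num_train], seeds[num_train:]

train_levels, test_levels = make_level_split()
# Held-out evaluation requires the two sets to be disjoint.
assert set(train_levels).isdisjoint(test_levels)
```

Because `random.sample` draws without replacement, disjointness of the two splits is guaranteed by construction rather than checked probabilistically.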



Figure 1: Overview of Prioritized Level Replay. The next level is either sampled from a distribution with support over unseen levels (top), which could be the environment's (perhaps implicit) full training-level distribution, or alternatively, sampled from the replay distribution, which prioritizes levels based on future learning potential (bottom). In either case, a trajectory τ is sampled from the next level and used to update the replay distribution. This update depends on the list of previously seen levels Λ_seen, their latest estimated learning potentials S, and their last sampled timestamps C.
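The top-level sampling decision in Figure 1 can be sketched as follows. The fixed mixing probability `p_replay` is an illustrative stand-in for the method's actual mixing rule, and the staleness information carried by the timestamps C is omitted here for brevity; the function and argument names are invented for this sketch.

```python
import random

def sample_next_level(seen_levels, scores, p_replay=0.5,
                      sample_unseen=None, rng=None):
    """Choose the next training level: with probability p_replay (once
    some levels have been seen), replay a seen level drawn from the
    score-based replay distribution; otherwise draw an unseen level."""
    rng = rng or random.Random()
    if seen_levels and rng.random() < p_replay:
        total = sum(scores[l] for l in seen_levels)
        if total > 0:
            weights = [scores[l] / total for l in seen_levels]
            return rng.choices(seen_levels, weights=weights, k=1)[0]
        return rng.choice(seen_levels)  # no signal yet: fall back to uniform
    # Fall back to the (possibly implicit) distribution over unseen levels.
    return sample_unseen() if sample_unseen else rng.randrange(10**6)
```

In either branch, the trajectory collected from the chosen level is then used to refresh that level's score, closing the loop depicted in the figure.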

