TIME-MYOPIC GO-EXPLORE: LEARNING A STATE REPRESENTATION FOR THE GO-EXPLORE PARADIGM

Abstract

Very large state spaces with a sparse reward signal are difficult to explore. The lack of sophisticated guidance results in poor performance for numerous reinforcement learning algorithms, and the commonly used random exploration is often not helpful in these cases. The literature shows that this kind of environment requires enormous effort to systematically explore large chunks of the state space. Learned state representations can help here to improve the search by providing semantic context and building structure on top of the raw observations. In this work we introduce a novel time-myopic state representation that clusters temporally close states together while providing a time-prediction capability between them. By adapting this model to the Go-Explore paradigm (Ecoffet et al., 2021b), we demonstrate the first learned state representation that reliably estimates novelty, instead of relying on the hand-crafted representation heuristic. Our method offers an improved solution for the detachment problem, which remains an issue in the Go-Explore exploration phase. We provide evidence that our proposed method covers the entire state space with respect to all possible time trajectories without causing disadvantageous conflict-overlaps in the cell archive. Analogous to native Go-Explore, our approach is evaluated on the hard-exploration environments Montezuma's Revenge, Gravitar and Frostbite (Atari) in order to validate its capabilities on difficult tasks. Our experiments show that time-myopic Go-Explore is an effective alternative to the domain-engineered heuristic while also being more general. The source code of the method is available on GitHub: made.public.after.acceptance.

1. INTRODUCTION

In recent years, the problem of sufficient and reliable exploration has remained an active area of research in reinforcement learning. In this effort, an agent seeks to maximize its extrinsic discounted sum of rewards without ending up with a sub-optimal behavior policy. A good exploration mechanism should encourage the agent to seek novelty and dismiss quick rewards for a healthy amount of time in order to evaluate the long-term consequences of its action selection. Four main issues arise when performing exploration in a given environment: (i) catastrophic forgetting (Goodfellow et al., 2013) as the data distribution shifts because the policy changes, (ii) a neural network's overconfident evaluation of unseen states (Zhang et al., 2018), (iii) sparse-reward Markov decision processes and (iv) the exploration-exploitation trade-off (Sutton and Barto, 2018; Ecoffet et al., 2021a). The latter causes a significant problem: the greater the exploitation, the less exploration is done. This makes novelty search less important when the agent can easily reach states with large rewards. To address these difficulties, Ecoffet et al. (2021b) propose a new approach called Go-Explore and achieve state-of-the-art results on hard exploration problems. However, Go-Explore relies on hand-crafted heuristics. In our work we replace its discrete state representations with learned representations particularly designed to estimate elapsed time in order to improve the novelty estimation. The time distance between two states is used to build an abstraction level on top of the raw observations by grouping temporally close states. Equipped with this capability, our model can decide on the acceptance or rejection of states for the archive memory and additionally maintain exploration statistics.

Contribution. This work attempts to improve exploration by introducing a learnable novelty estimation method that is applicable to arbitrary inputs.
In the experiment section, we demonstrate the reliability of our method by comparing its results with native Go-Explore and, more broadly, with several other exploration-based and baseline approaches. The main contributions of this paper are the following:

1. We introduce a new novelty estimation method consisting of a siamese encoder and a time-myopic prediction network which learns problem-specific representations that are useful for time prediction. The time distances are later used to determine novelty, which generates a time-dependent state abstraction.

2. We implement a new archive for Go-Explore that includes a new insertion criterion, a novel strategy to count cell visits and a new selection mechanism for cell restarts.
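The interplay of these two contributions can be illustrated with a minimal, untrained sketch (NumPy only). All network shapes, the 20-step horizon, the novelty threshold and the 1/visits restart weighting below are hypothetical placeholders chosen for illustration, not the architecture or values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized (untrained) weights; in the actual method these
# would be learned from time-distance targets along trajectories.
W_enc = rng.normal(scale=0.1, size=(64, 16))   # obs_dim=64 -> embed_dim=16
W_time = rng.normal(scale=0.1, size=32)        # embedding pair -> scalar logit

def encode(obs):
    """Siamese encoder: the SAME weights embed both observations."""
    return np.tanh(obs @ W_enc)

def predict_time(obs_a, obs_b, horizon=20.0):
    """Estimate the elapsed time from state a to state b.

    The prediction head sees the concatenated embedding pair and squashes
    its output to [0, 1]; scaling by `horizon` yields a step estimate that
    saturates beyond the horizon (hence "time-myopic").
    """
    z = np.concatenate([encode(obs_a), encode(obs_b)])
    return float(1.0 / (1.0 + np.exp(-(z @ W_time)))) * horizon

class TimeMyopicArchive:
    """Toy archive: insert on novelty, count visits, restart at rare cells."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold      # minimum predicted time distance
        self.cells, self.visits = [], []

    def observe(self, obs):
        for i, cell in enumerate(self.cells):
            if predict_time(cell, obs) <= self.threshold:
                self.visits[i] += 1     # temporally close to a known cell
                return
        self.cells.append(obs)          # far from every cell: accept as novel
        self.visits.append(1)

    def select_restart(self):
        # Favor rarely visited cells (a simple 1/visits weighting).
        w = 1.0 / np.asarray(self.visits, dtype=float)
        return self.cells[rng.choice(len(self.cells), p=w / w.sum())]
```

With trained weights, `predict_time` would cluster temporally close states under the same cell, while `observe` and `select_restart` realize the insertion criterion, visit counting and restart selection of the new archive.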

2. PROBLEMS OF GO-EXPLORE

Go-Explore maintains an archive of saved states that are used as milestones for restarting from intermediate states, thereby preventing detachment and derailment (as discussed in Ecoffet et al. (2021a)). It generates its state representation by down-scaling the observation and removing color (see Figure 2, left). The exact representation depends on three hyperparameters (width, height, pixel depth), which have to be tuned for each environment. If two distinct observations generate the same encoding they are considered similar, otherwise dissimilar. Thus, all possible states are grouped into a fixed number of representatives, which leads to overlap conflicts when two distinct observations receive the same encoding. In these cases one of the states has to be abandoned, which might be the reason that, for the Atari environment Montezuma's Revenge, only 57 out of 100 runs reached the second level of the game (as reported in Ecoffet et al. (2021a)). We conjecture that their replacement criterion favors a certain state over another, so that exploration is stopped at the abandoned state. This can decouple an entire state subspace from the agent's exploration endeavor. A related issue emerges from ignoring the spatio-temporal semantics between states that are grouped together into a single representation. The down-scaling method merely compresses the image information without considering its semantic content. The consequence is a replacement criterion that resolves archive conflicts between states that are neither spatially nor temporally in a close relationship. Therefore a conflict solver has to resolve illogical and non-intuitive conflicts, because in these cases it is not obvious which state should be favored; we have no information about which potentially reachable states are more promising. On top of that, the cell representation is only suitable for states represented by small images. If we have images with higher resolutions the representations



Figure 1: Model architecture. For details see Section 3.
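The down-scaling heuristic of native Go-Explore discussed in Section 2 can be sketched as follows. This is a dependency-light illustration: the nearest-neighbor interpolation and the hyperparameter values (width 11, height 8, depth 8) are illustrative assumptions, not the tuned per-environment settings:

```python
import numpy as np

def cell_representation(frame, width=11, height=8, depth=8):
    """Go-Explore-style cell encoding: remove color, down-scale, quantize.

    `width`, `height` and `depth` are the three hyperparameters that have
    to be tuned per environment; the defaults here are only illustrative.
    """
    gray = frame.mean(axis=2)                 # drop color: (H, W, 3) -> (H, W)
    h, w = gray.shape
    # Naive nearest-neighbor down-scaling (the exact interpolation is an
    # implementation detail; this keeps the sketch dependency-free).
    rows = (np.arange(height) * h) // height
    cols = (np.arange(width) * w) // width
    small = gray[np.ix_(rows, cols)]
    # Reduce the pixel depth to `depth` discrete gray levels.
    quantized = (small * depth / 256).astype(np.uint8)
    # Hashable encoding: distinct frames that map to the same tuple collide
    # into one archive cell -- the overlap conflict described above.
    return tuple(quantized.flatten())
```

Because the encoding is a fixed-size tuple of coarse gray levels, many semantically unrelated frames can share one cell, which is exactly the conflict situation the learned time-myopic representation is meant to avoid.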

