EVALUATING LONG-TERM MEMORY IN 3D MAZES

Abstract

Intelligent agents need to remember salient information to reason in partially observed environments. For example, agents with a first-person view should remember the positions of relevant objects even if they go out of view. Similarly, to effectively navigate through rooms, agents need to remember the floor plan of how rooms are connected. However, most benchmark tasks in reinforcement learning do not test long-term memory in agents, slowing down progress in this important research direction. In this paper, we introduce the Memory Maze, a 3D domain of randomized mazes specifically designed for evaluating long-term memory in agents. Unlike existing benchmarks, Memory Maze measures long-term memory separately from confounding agent abilities and requires the agent to localize itself by integrating information over time. With Memory Maze, we propose an online reinforcement learning benchmark, a diverse offline dataset, and an offline probing evaluation. Recording a human player establishes a strong baseline and verifies the need to build up and retain memories, which is reflected in their gradually increasing rewards within each episode. We find that current algorithms benefit from training with truncated backpropagation through time and succeed on small mazes, but fall short of human performance on large mazes, leaving room for future algorithmic designs to be evaluated on the Memory Maze.

1. INTRODUCTION

Deep reinforcement learning (RL) has made tremendous progress in recent years, outperforming humans on Atari games (Mnih et al., 2015; Badia et al., 2020) and board games (Silver et al., 2016; Schrittwieser et al., 2019), and driving advances in robot learning (Akkaya et al., 2019; Wu et al., 2022). Much of this progress has been driven by the availability of challenging benchmarks that are easy to use and allow for standardized comparison (Bellemare et al., 2013; Tassa et al., 2018; Cobbe et al., 2020). What is more, the RL algorithms developed on these benchmarks are often general enough

[Figure panels: "Agent Inputs" (top row, first-person frames) and "Underlying Trajectory" (bottom row, top-down views) at t = 0, 30, 60, 90, 120, 150.]

Figure 1: The first 150 time steps of an episode in the Memory Maze 9x9 environment. The bottom row shows the top-down view of a randomly generated maze with 3 colored objects. The agent only observes the first-person view (top row), which includes a prompt for the next object to find as a border of the corresponding color. The agent receives +1 reward when it reaches the object of the prompted color. During the episode, the agent has to visit the same objects multiple times, testing its ability to memorize their positions, the way the rooms are connected, and its own location.
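The prompt-and-reward loop described in the caption can be illustrated with a minimal gridworld sketch. Everything here (class name, coordinate layout, the `step` signature) is illustrative only and not the actual Memory Maze implementation; the point is just the mechanic of earning +1 for reaching the prompted object and then being re-prompted, which forces revisiting objects from memory.

```python
import random


class ToyMemoryMaze:
    """Toy sketch of the Memory Maze reward mechanic: the agent is
    repeatedly prompted for one of several colored objects and earns
    +1 reward on reaching the prompted object's position."""

    def __init__(self, object_positions, seed=0):
        # e.g. {"red": (1, 0), "green": (0, 1), "blue": (1, 1)}
        self.objects = dict(object_positions)
        self.rng = random.Random(seed)
        self.agent = (0, 0)
        # the current prompt names the object the agent must find
        self.prompt = self.rng.choice(list(self.objects))

    def step(self, move):
        dx, dy = move
        self.agent = (self.agent[0] + dx, self.agent[1] + dy)
        reward = 0
        if self.agent == self.objects[self.prompt]:
            reward = 1  # reached the prompted object
            # resample a new target, so the same objects must be
            # revisited over the course of the episode
            self.prompt = self.rng.choice(list(self.objects))
        return self.agent, reward, self.prompt


env = ToyMemoryMaze({"red": (1, 0), "green": (0, 1), "blue": (1, 1)})
```

In the real environment the prompt is conveyed visually as a colored border on the first-person observation rather than as a symbol, so the agent must also learn to read the prompt from pixels.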


https://github.com/jurgisp/

