GRAPH BACKUP: DATA EFFICIENT BACKUP EXPLOITING MARKOVIAN TRANSITIONS

Abstract

The successes of deep Reinforcement Learning (RL) are largely limited to settings with a large stream of online experiences; applying RL in the data-efficient setting, with limited access to online interaction, remains challenging. A key to data-efficient RL is good value estimation, but current methods in this space fail to fully utilise the structure of the trajectory data gathered from the environment. In this paper, we treat the transition data of the MDP as a graph, and define a novel backup operator, Graph Backup, which exploits this graph structure for better value estimation. Compared to multi-step backup methods such as n-step Q-Learning and TD(λ), Graph Backup can perform counterfactual credit assignment and gives stable value estimates for a state regardless of which trajectory the state was sampled from. Our method, when combined with popular off-policy value-based methods, provides improved performance over one-step and multi-step methods on a suite of data-efficient RL benchmarks including MiniGrid, MinAtar and Atari100K. We further analyse the reasons for this performance boost through a novel visualisation of the transition graphs of Atari games.

1. INTRODUCTION

Deep Reinforcement Learning (DRL) methods have achieved super-human performance in a varied range of games (Mnih et al., 2015; Silver et al., 2016; Berner et al., 2019; Vinyals et al., 2019). All of these present a proof of existence for DRL: with a large amount of online interaction, DRL-trained policies can learn to solve problems that have similar properties to real-world decision-making tasks. However, most real-world tasks, such as autonomous driving or financial trading, are hard to simulate, and generating new interaction data can be expensive. This makes it crucial to develop data-efficient RL approaches that solve sequential decision-making problems with limited online environment interaction.

As many existing DRL algorithms assume access to a simulator, they do not focus on efficiently using the available data: it is always cheaper to simply generate fresh data from the simulator. Data is normally stored in a buffer and only used a few times for learning before being discarded. However, there is a lot of additional structure in the transition data, and a key insight of our work is to organise the trajectories stored in the buffer as a graph (see, for example, Figure 1 (a), which shows a visualisation of the transition graph of the Atari game Frostbite). Our method, Graph Backup, then exploits this transition graph to provide a novel backup operator for bootstrapped value estimation. When estimating the value of a state, it combines information from a subgraph rooted at the target state, including rewards and value estimates for future states.

When the environment has Markovian transitions and crossovers between trajectories, the construction of this data graph provides several benefits. As discussed in Section 4.2, our method exploits intersecting trajectories to correctly propagate reward to more states, effectively by propagating reward along an imagined trajectory. Further, while existing improvements to one-step backup (as used by Mnih et al. (2015)), such as multi-step backup (Moriarty & Miikkulainen, 1995; Hessel et al., 2018; Sutton & Barto, 2018), address the problem of slow reward-information propagation (Hernandez-Garcia & Sutton, 2019), they add variance to the state value estimates, as the same state can receive different value estimates depending on the trajectory it was sampled from. Our method addresses this issue by grouping states in the transition graph and averaging over outgoing transitions at the value-estimation stage.
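The core idea of organising buffered trajectories as a transition graph can be sketched as follows. This is an illustrative construction under simplifying assumptions (discrete, hashable states; the class and method names are our own, not from the paper): states are merged across trajectories, so every observed outgoing transition of a state becomes available for backup regardless of which trajectory produced it.

```python
from collections import defaultdict

class TransitionGraph:
    """Stores replay transitions keyed by state, merging trajectories
    that pass through the same state (valid under Markovian dynamics)."""

    def __init__(self):
        # state -> list of (action, reward, next_state) outgoing edges
        self.edges = defaultdict(list)

    def add_trajectory(self, trajectory):
        """trajectory: list of (state, action, reward, next_state) tuples."""
        for state, action, reward, next_state in trajectory:
            self.edges[state].append((action, reward, next_state))

    def successors(self, state):
        """All observed outgoing transitions from `state`, from any
        trajectory -- this merging at crossover states is what enables
        counterfactual credit assignment across episodes."""
        return self.edges[state]

# Two trajectories that cross at state "B": after merging, a backup
# through "B" can draw on both observed continuations.
g = TransitionGraph()
g.add_trajectory([("A", 0, 0.0, "B"), ("B", 1, 1.0, "C")])
g.add_trajectory([("D", 0, 0.0, "B"), ("B", 0, 0.0, "E")])
print(len(g.successors("B")))  # 2 outgoing edges merged at the crossover state
```

Averaging value estimates over these merged outgoing edges (rather than over a single sampled trajectory) is what reduces the trajectory-dependent variance discussed above.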

2. RELATED WORK

The idea of multi-step backup algorithms (e.g. TD(λ), n-step TD) dates back to early work in tabular reinforcement learning (Sutton, 1988; Sutton & Barto, 2018). Two approaches to multi-step targets are n-step methods and eligibility-trace methods. The n-step method is a natural extension of the one-step target that takes the rewards and value estimates of n steps into the future into consideration. For example, the n-step SARSA (Rummery & Niranjan, 1994; Sutton & Barto, 2018) target for step t is simply the sum of the next n rewards and the value at timestep t + n: R_{t+1} + R_{t+2} + ... + R_{t+n} + V(S_{t+n}). Graph Backup is an extension of an n-step backup target, Tree Backup, which will be described in Section 3. Eligibility-trace (Sutton, 1988) methods instead estimate the λ-return, which is an infinite weighted sum of n-step returns. The advantage of the eligibility-trace method is that it can be computed in an online manner without explicit storage of all past experiences, while still computing accurate target value estimates. However, in the context of off-policy RL, eligibility traces are not widely applied, because the use of a replay buffer means all past experiences are already stored. In addition, eligibility traces are designed for the case of a linear function approximator, and it is nontrivial to apply them to neural networks. van Hasselt et al. (2021) proposed an extension of the eligibility-trace method called expected eligibility traces. Similar to Graph Backup, this allows information propagation across different episodes and thus enables counterfactual credit assignment. However, like the original eligibility-trace methods, it is a better fit for the linear and on-policy case, whereas Graph Backup is designed for the non-linear and off-policy case.

Monte Carlo Tree Search (MCTS)-based algorithms also share some similarities with Graph Backup, as they also utilise tree-structured search. However, our work is aimed at model-free RL, and so is separate from these works.

Several recent works have also utilised the graph structure of MDP transition data. Zhu et al. (2020) propose to use the MDP graph as an associative memory to improve Episodic Reinforcement Learning.
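As a concrete illustration of the n-step target described above, here is a minimal computation (undiscounted, matching the expression in the text; the function name is ours, not from the paper):

```python
def n_step_target(rewards, bootstrap_value, n):
    """Undiscounted n-step target: R_{t+1} + ... + R_{t+n} + V(S_{t+n}).

    rewards: the n rewards R_{t+1}, ..., R_{t+n} observed after step t.
    bootstrap_value: the value estimate V(S_{t+n}) at the bootstrap state.
    """
    assert len(rewards) == n
    return sum(rewards) + bootstrap_value

# The one-step target is the n = 1 special case: R_{t+1} + V(S_{t+1}).
print(n_step_target([1.0], 0.5, 1))            # 1.5
print(n_step_target([1.0, 0.0, 2.0], 0.5, 3))  # 3.5
```

Note that because the rewards here come from one sampled trajectory, the resulting target depends on which trajectory the state appeared in; this is the source of variance that graph-structured backups aim to remove.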



Figure 1: (a) shows the transition graph of Frostbite, an Atari game, extracted from the replay buffer of a Graph Backup agent after 100k steps. (b) shows backup diagrams for different backup targets. Circles are states, blue squares represent the actions that have been observed for the given state node, and orange squares are actions where target-network evaluation happened.

Since a learned model can be treated as a distilled replay buffer (van Hasselt et al., 2019), we can view model-based reinforcement learning as related to our work. Recent examples include Schrittwieser et al. (2020); Hessel et al. (2021); Farquhar et al. (2018); Hafner et al. (2021b); Kaiser et al. (2020b); Ha & Schmidhuber (2018).

