LEARNING TO SAMPLE WITH LOCAL AND GLOBAL CONTEXTS FROM EXPERIENCE REPLAY BUFFERS

Abstract

Experience replay, which enables agents to remember and reuse past experience, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize experience replay efficiently, existing sampling methods select more meaningful experiences by imposing priorities on transitions based on certain metrics (e.g., TD-error). However, they may sample highly biased, redundant transitions, since they compute the sampling rate for each transition independently, without considering its importance in relation to other transitions. In this paper, we address this issue by proposing a new learning-based sampling method that computes the relative importance of each transition. To this end, we design a novel permutation-equivariant neural architecture that takes as input contexts from not only the features of each transition (local) but also those of the others (global). We validate our framework, which we refer to as Neural Experience Replay Sampler (NERS), on multiple benchmarks for both continuous and discrete control tasks and show that it can significantly improve the performance of various off-policy RL methods. Further analysis confirms that the improvements in sample efficiency are indeed due to NERS sampling diverse and meaningful transitions by considering both local and global contexts.

1. INTRODUCTION

Experience replay (Mnih et al., 2015), a memory that stores past experiences for reuse, has become a popular mechanism for reinforcement learning (RL), since it stabilizes training and improves sample efficiency. The success of various off-policy RL algorithms is largely attributed to the use of experience replay (Fujimoto et al., 2018; Haarnoja et al., 2018a;b; Lillicrap et al., 2016; Mnih et al., 2015). However, most off-policy RL algorithms adopt uniform random sampling (Fujimoto et al., 2018; Haarnoja et al., 2018a; Mnih et al., 2015), which treats all past experiences equally, so it is questionable whether this simple strategy always samples the most effective experiences for the agents to learn from. Several sampling policies have been proposed to address this issue. One popular direction is to develop rule-based methods, which prioritize experiences with pre-defined metrics (Isele & Cosgun, 2018; Jaderberg et al., 2016; Novati & Koumoutsakos, 2019; Schaul et al., 2016). Notably, TD-error based sampling, which has improved the performance of various off-policy RL algorithms (Hessel et al., 2018; Schaul et al., 2016) by prioritizing more meaningful samples, i.e., those with high TD-error, is one of the most frequently used rule-based methods. Here, the TD-error measures how unexpected the returns are given the current value estimates (Schaul et al., 2016). However, such rule-based sampling strategies can lead to sampling highly biased experiences. For instance, Figure 1 shows 10 randomly selected transitions among 64 transitions sampled using certain
metrics/rules under a policy-based algorithm, soft actor-critic (SAC) (Haarnoja et al., 2018a), on Pendulum-v0 after 30,000 timesteps, whose goal is to balance the pendulum so that it stays in the upright position. We observe that sampling by the TD-error alone mostly selects initial transitions (see Figure 1(a)), where the rods are in the downward position, since it is difficult to estimate the Q-value for them. Conversely, the transitions sampled by Q-value describe rods in the upright position (see Figure 1(b)), which will provide high returns to agents. Both can largely contribute to the update of the actor and critic, since the advantage term and the mean square of the TD-errors are large. Yet, due to this bias, an agent trained in such a manner will mostly learn what to do in a specific state, but will not learn about other states that should be experienced for proper learning. Therefore, such biased (and redundant) transitions may not increase sample efficiency, even though each sampled transition may be individually meaningful. On the other hand, focusing only on the diversity of samples also has an issue. For instance, sampling uniformly at random selects diverse transitions, including intermediate states such as those in the red boxes of Figure 1(c); however, such transitions are occasionally irrelevant for training both the policy and the Q-networks. Motivated by the aforementioned observations, we aim to develop a method to sample transitions that are both diverse and meaningful. To capture both properties, it is crucial to measure the relative importance among the sampled transitions, since diversity should be considered among them, not across all transitions in the buffer. To this end, we propose a novel neural sampling policy, which we refer to as Neural Experience Replay Sampler (NERS). Our method learns to measure the relative importance among sampled transitions by extracting local and global contexts from each of them and from all sampled ones, respectively.
In particular, NERS is designed to take a set of each experience's features as input and compute its outputs equivariantly with respect to permutations of the set. Here, we consider various features of a transition, such as the TD-error, the Q-value, and the raw transition, e.g., so as to efficiently sample intermediate transitions such as those in the blue boxes of Figure 1(c). To verify the effectiveness of NERS, we validate the experience replay with various off-policy RL algorithms, such as soft actor-critic (SAC) (Haarnoja et al., 2018a) and twin delayed deep deterministic policy gradient (TD3) (Fujimoto et al., 2018) for continuous control tasks (Brockman et al., 2016; Todorov et al., 2012), and Rainbow (Hessel et al., 2018) for discrete control tasks (Bellemare et al., 2013). Our experimental results show that NERS consistently (and often significantly, for complex tasks with high-dimensional state and action spaces) outperforms both the existing rule-based (Schaul et al., 2016) and learning-based (Zha et al., 2019) sampling methods for experience replay. In summary, our contribution is threefold:
• To the best of our knowledge, we are the first to investigate the relative importance of sampled transitions for the efficient design of experience replays.
• To this end, we design a novel permutation-equivariant neural sampling architecture that utilizes contexts from individual (local) and collective (global) transitions with various features, to sample not only meaningful but also diverse experiences.
• We validate the effectiveness of our neural experience replay on diverse continuous and discrete control tasks with various off-policy RL algorithms, on which it consistently outperforms both existing rule-based and learning-based sampling methods.

2. NEURAL EXPERIENCE REPLAY SAMPLER

We consider a standard reinforcement learning (RL) framework, where an agent interacts with an environment over discrete timesteps. Formally, at each timestep t, the agent receives a state s_t from the environment and selects an action a_t based on its policy π. Then, the environment returns a reward r_t, and the agent transitions to the next state s_{t+1}. The goal of the agent is to learn the policy π that maximizes, at each state s_t, the return R_t = Σ_{k=0}^∞ γ^k r_{t+k}, i.e., the discounted cumulative reward from timestep t with a discount factor γ ∈ [0, 1). Throughout this section, we focus on off-policy actor-critic RL algorithms with a buffer B, which consist of the policy π_ψ(a|s) (i.e., actor) and the Q-function Q_θ(s, a) (i.e., critic) with parameters ψ and θ, respectively.
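As a minimal illustration of the return defined above (not part of the paper), the discounted sum over a finite episode can be computed with the backward recursion R_t = r_t + γ R_{t+1}:

```python
def discounted_return(rewards, gamma):
    """Compute R_0 = sum_k gamma^k * r_k for a finite reward sequence,
    accumulating from the last reward backward: R_t = r_t + gamma * R_{t+1}."""
    ret = 0.0
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

# With gamma = 0.5 and three unit rewards: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], 0.5))
```

The backward pass avoids recomputing powers of γ and is the standard way returns are accumulated over stored episodes.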

2.1. OVERVIEW OF NERS

We propose a novel neural sampling policy f with parameters φ, called Neural Experience Replay Sampler (NERS). It is trained to select important transitions from the experience replay buffer so as to maximize the actual cumulative rewards. Specifically, at each timestep, NERS receives a set of features of off-policy transitions, which are sampled from the buffer B proportionally to priorities evaluated at previous timesteps. It then outputs a set of new scores for the set, with which the priorities are updated. Both the sampled transitions and scores are used to optimize the off-policy policy π_ψ(a|s) and action-value function Q_θ(s, a). Note that the output of NERS should be equivariant under permutations of the input set, so we design its neural architecture to satisfy this property. Next, we define the reward r_re as the actual performance gain, i.e., the difference between the expected sums of rewards under the current and previous evaluation policies. Figure 2 shows an overview of the proposed framework, which learns to sample from the experience replay. In the following section, we describe our method for learning the sampling policy and the proposed network architecture in detail.

2.2. DETAILED COMPONENTS OF NERS

Input observations. Throughout this paper, we denote the set {1, . . . , n} by [n]. At each timestep, NERS samples an index set I and the corresponding batch of transitions {B_i}_{i∈I} ⊂ B proportionally to the priority set P_B, where, with a hyper-parameter α > 0, the sampling probability of transition i is

p_i = σ_i^α / Σ_{k∈[|B|]} σ_k^α.   (1)

The overall training procedure is summarized in Algorithm 1:

Algorithm 1: Training with NERS
  for each timestep do
    Sample an index set I' proportionally to the priorities; compute the scores {σ_i} and weights {w_i} by Eq. (4) and Eq. (5), respectively
    Train the actor and critic using the batch {B_i}_{i∈I'} ⊂ B and the corresponding weights {w_i}_{i∈I'}
    Collect I ← I ∪ I' and update P_B(I) with the score set {σ_k}_{k∈I}
  end for
  at the end of each episode:
    Choose a subset I_train ⊂ I uniformly at random such that |I_train| = n
    Calculate r_re as in Eq. (6)
    Update the sampling policy φ using the gradient in Eq. (7) with respect to I_train
    Empty I, i.e., I ← ∅

Then, we define the following set of features for {B_i}_{i∈I}:

D(B, I) = { (s_{κ(i)}, a_{κ(i)}, r_{κ(i)}, s_{κ(i)+1}, κ(i), δ_{κ(i)}, r_{κ(i)} + γ max_a Q_θ̄(s_{κ(i)+1}, a)) }_{i∈I},

where γ is a discount factor, θ̄ is the target network parameter, and δ_{κ(i)} is the TD-error defined as

δ_{κ(i)} = r_{κ(i)} + γ max_a Q_θ̄(s_{κ(i)+1}, a) − Q_θ(s_{κ(i)}, a_{κ(i)}).

The TD-error indicates how 'surprising' or 'unexpected' the transition is (Schaul et al., 2016). Note that the input D(B, I) contains various features, including both exact values (i.e., states, actions, rewards, next states, and timesteps) and values predicted from a long-term perspective (i.e., TD-errors and Q-values). We abbreviate D(B, I) as D(I) for simplicity. Utilizing this variety of information is crucial for selecting diverse and important transitions (see Section 3).

Architecture and action spaces. We now describe the neural network structure of NERS, f. It takes D(I) as input and generates scores, which are used to sample transitions proportionally. Specifically, f consists of three learnable networks, called the local, global, and score networks, f_l, f_g, and f_s, with output dimensions d_l, d_g, and 1, respectively. The local network captures the attributes of each individual transition:

f_l(D(I)) = [f_{l,1}(D(I)), . . . , f_{l,|I|}(D(I))] ∈ R^{|I|×d_l}, where f_{l,k}(D(I)) ∈ R^{d_l} for k ∈ [|I|].
The global network aggregates the collective information of the transitions by averaging:

f_g^avg(D(I)) = (1/|I|) Σ_{k∈[|I|]} f_{g,k}(D(I)) ∈ R^{1×d_g}, where f_g(D(I)) ∈ R^{|I|×d_g}.

Then, by concatenating them, one obtains the input for the score network f_s:

D_cat(I) := [f_{l,1}(D(I)) ⊕ f_g^avg(D(I)), . . . , f_{l,|I|}(D(I)) ⊕ f_g^avg(D(I))] ∈ R^{|I|×(d_l+d_g)},

where ⊕ denotes concatenation. Finally, the score network generates a score set:

f_s(D_cat(I)) = {σ_i}_{i∈I} ∈ R^{|I|}.   (4)

One can easily see that f_s is permutation-equivariant with respect to the input D(I). The set {σ_i}_{i∈I} is used to update the priority set P for the transitions indexed by I via Eq. (1), and to compute importance-sampling weights that compensate for the bias of the sampling probabilities (Schaul et al., 2016):

w_i = (1 / (|B| p_i))^β,   (5)

where β > 0 is a hyper-parameter. The actor and critic then receive the training batch D(I) and the corresponding weights {w_i}_{i∈I}, i.e., the learning rate for training sample B_i is set proportional to w_i. Since this structure satisfies the permutation-equivariance property, the relative importance of each transition can be evaluated by observing not only the transition itself but also the others.

Reward function and optimizing the sampling policy. We update NERS at each evaluation step. To optimize the sampling policy, we define the replay reward r_re of the current evaluation as the difference in expected episode returns between the policies π and π' used in the current and previous evaluations, as in (Zha et al., 2019):

r_re := E_π[ Σ_{t ∈ {timesteps in an episode}} r_t ] − E_{π'}[ Σ_{t ∈ {timesteps in an episode}} r_t ].   (6)

The replay reward can be interpreted as measuring how much the actions of the sampling policy help the learning of the agent in each episode.
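To make the local–global computation concrete, here is a minimal NumPy sketch of a permutation-equivariant scorer in the spirit of f_l, f_g, and f_s. The single-linear-layer "networks", the tanh nonlinearity, and the dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_l, d_g = 7, 8, 8  # feature dim (e.g., s, a, r, s', t, TD-error, Q) and context dims

# Illustrative one-layer stand-ins for f_l, f_g, f_s (the paper uses deeper networks).
W_l = rng.normal(size=(d_in, d_l))
W_g = rng.normal(size=(d_in, d_g))
W_s = rng.normal(size=(d_l + d_g, 1))

def scores(D):
    """D: (|I|, d_in) batch of transition features -> (|I|,) scores."""
    local = np.tanh(D @ W_l)              # per-transition (local) context
    glob = np.tanh(D @ W_g).mean(axis=0)  # averaged (global) context, shared by all rows
    cat = np.concatenate([local, np.tile(glob, (D.shape[0], 1))], axis=1)
    return (cat @ W_s).ravel()            # one score per transition

D = rng.normal(size=(5, d_in))
perm = rng.permutation(5)
# Permutation equivariance: permuting the input rows permutes the output scores identically,
# because the global context is a symmetric (mean) pooling over rows.
assert np.allclose(scores(D)[perm], scores(D[perm]))
```

The key design choice is that the only cross-transition interaction is the mean pooling, which is invariant to row order; this is what makes the overall map equivariant.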
Notice that r_re only observes the difference of the mean cumulative rewards between the current and previous evaluation policies, since NERS needs to choose transitions without knowing which samples will be added or how well agents will be trained in the future. To maximize the sample efficiency of learning the agent's policy, we train the sampling policy to select past transitions so as to maximize r_re. To train NERS, one chooses I_train, a subset of the index set I of all transitions sampled in the current episode. Then we use the following REINFORCE (Williams, 1992) gradient:

∇_φ E_{I_train}[r_re] = E_{I_train}[ r_re Σ_{i∈I_train} ∇_φ log p_i(D(I_train)) ],   (7)

where p_i is defined in Eq. (1). The detailed procedure is provided in Algorithm 1. While ERO (Zha et al., 2019) uses a similar replay reward (Eq. (6)), there are a number of fundamental differences between it and our method. First of all, ERO does not consider the relative importance between the transitions as NERS does, but rather learns an individual sampling rate for each transition. Moreover, it considers only three types of features, namely the TD-error, the reward, and the timestep, while NERS considers a larger set of more informative features not used by ERO, such as raw transitions, Q-values, and actions. However, the most important difference between the two is that ERO performs two-stage sampling: it first samples with an individually learned Bernoulli sampling probability for each transition, and then performs random sampling from the subset of sampled transitions. With such a strategy, the first-stage sampling is highly inefficient even for moderately sized experience replays, since it must compute the sampling rate for each individual instance. Accordingly, the time complexity of the first-stage sampling scales with the capacity of the buffer B, i.e., O(|B|).
On the contrary, NERS uses a sum-tree structure as in (Schaul et al., 2016) to sample transitions with priorities, so that its sampling time complexity is O(log |B|). Secondly, since the number of experiences selected by ERO's first-stage sampling is large, this stage may have little or no effect, making ERO behave similarly to random sampling. Moreover, ERO updates its network with the replay reward and experiences that are not obtained from the two-stage sampling but sampled uniformly at random (see Algorithm 2 in Zha et al. (2019)). In other words, samples that are never selected affect the training of ERO, while NERS updates its network solely based on the transitions that it actually selected.
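For reference, priority-proportional sampling with a sum-tree as in (Schaul et al., 2016) can be sketched as follows. This is a minimal array-based version assuming a power-of-two capacity; a full replay buffer would also store transitions alongside the priorities:

```python
import random

class SumTree:
    """Binary tree whose leaves hold priorities and whose internal nodes hold
    subtree sums, so both priority updates and sampling take O(log n) time."""
    def __init__(self, capacity):  # capacity assumed to be a power of two
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # nodes 1..capacity-1 internal, capacity.. leaves

    def update(self, idx, priority):
        pos = idx + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:  # propagate the new sum up to the root
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def sample(self):
        # Walk down from the root, choosing a child in proportion to its mass.
        s = random.uniform(0.0, self.tree[1])  # tree[1] is the total priority
        pos = 1
        while pos < self.capacity:
            left = 2 * pos
            if s <= self.tree[left]:
                pos = left
            else:
                s -= self.tree[left]
                pos = left + 1
        return pos - self.capacity

tree = SumTree(4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.update(i, p)
# Index 3 (priority 4 out of total 10) should be drawn about 40% of the time.
counts = [0] * 4
for _ in range(10000):
    counts[tree.sample()] += 1
```

Compared with scanning all |B| priorities, the descent touches one node per level, which is what yields the O(log |B|) complexity cited above.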

3. EXPERIMENTS

In this section, we conduct experiments to answer the following questions:
• Can the proposed sampling method improve the performance of various off-policy RL algorithms for both continuous and discrete control tasks?
• Is it really effective to sample diverse and meaningful transitions by considering the relative importance with various contexts?

3.1. EXPERIMENTAL SETUP

Environments. In this section, we measure the performance of off-policy RL algorithms optimized with various sampling methods on standard continuous control environments with simulated robots (e.g., Ant-v3, Walker2D-v3, and Hopper-v3) from the MuJoCo physics engine (Todorov et al., 2012), and on classical and Box2D continuous control tasks (i.e., Pendulum*, LunarLanderContinuous-v2, and BipedalWalker-v3) from OpenAI Gym (Brockman et al., 2016). We also consider a subset of the Atari games (Bellemare et al., 2013) to validate the effect of our experience sampler on discrete control tasks (see Table 2). A detailed description of the environments is provided in the supplementary material.
Off-policy RL algorithms. We apply our sampling policy to state-of-the-art off-policy RL algorithms, namely twin delayed deep deterministic policy gradient (TD3) (Fujimoto et al., 2018) and soft actor-critic (SAC) (Haarnoja et al., 2018a), for continuous control tasks. For discrete control tasks, instead of the canonical Rainbow (Hessel et al., 2018), we use its data-efficient variant introduced in (van Hasselt et al., 2019). Notice that Rainbow already adopts PER; to compare sampling methods, we replace it with NERS, RANDOM, and ERO in Rainbow, respectively. Due to space limitations, we provide more experimental details in the supplementary material.
Baselines. We compare our neural experience replay sampler (NERS) with the following baselines:
• RANDOM: sampling transitions uniformly at random.
• PER (Prioritized Experience Replay): rule-based sampling of transitions with high temporal-difference errors (TD-errors) (Schaul et al., 2016).
• ERO (Experience Replay Optimization): a learning-based sampling method (Zha et al., 2019) which computes the sampling score for each transition independently, using the TD-error, timestep, and reward as features.

3.2. COMPARATIVE EVALUATION

Figure 3 shows the learning curves of each off-policy RL algorithm during training on classical and Box2D continuous control tasks. All results are reported over five runs with random seeds. We observe that NERS consistently outperforms the baseline sampling methods in all tested cases. In particular, it significantly improves the performance of all off-policy RL algorithms on various tasks with high-dimensional state and action spaces. These results imply that sampling good off-policy data is crucial for improving the performance of off-policy RL algorithms. Furthermore, they demonstrate the effectiveness of our method for both continuous and discrete control tasks, as it obtains significant performance gains on both types of tasks. On the other hand, we observe that PER, the rule-based sampling method, often performs worse than uniform random sampling (i.e., RANDOM) on these continuous control tasks, similarly to what was observed in (Zha et al., 2019). We suspect that this is because PER is more appropriate for Q-learning based algorithms than for policy-based learning, since TD-errors are used to update the Q-network. Moreover, even though ERO is a learning-based sampling method, its performance and sampling behavior are close to those of RANDOM, for two reasons. First, it considers the importance of each transition individually by assuming a Bernoulli distribution, which may result in sampling redundant transitions. Second, ERO performs two-stage sampling, where the transitions are first sampled according to their individual importance, and then further randomly sampled to construct a batch. However, since too many transitions are sampled in the first stage, the second-stage random sampling is similar to random sampling from the entire experience replay.

3.3. ANALYSIS OF OUR FRAMEWORK

In this subsection, we first show that each component of NERS is crucial for improving sample efficiency (Figure 4). Next, we show that NERS indeed samples not only diverse but also meaningful transitions for updating both actor and critic (Figure 5).
Contribution by each component. We analyze NERS to better understand the effect of each component. Figure 4(a) validates the contributions of our suggested techniques, where one can observe that the performance of NERS is significantly improved when using the full set of features. This implies that the transitions essential for training can be sampled only by considering various aspects of past experiences; using only a few features such as the reward, TD-error, and timestep does not result in sampling transitions that yield high expected returns in the future. Figure 4 also validates the effect of modeling the relative importance by comparing NERS with and without the global context; we found that sample efficiency is significantly improved by considering the relative importance among sampled transitions via learning the global context. Furthermore, although we have considered standard environments where evaluations are free, in an environment where the total number of evaluations is restricted, it may be hard to calculate the replay reward in Eq. (6), since the cumulative reward must be computed at each evaluation. For this reason, we consider a variant of NERS (NERS*) which computes the difference of cumulative rewards over training episodes rather than evaluations. Figure 4(b) and Figure 4(c) show the performance of NERS* compared to NERS and the other sampling methods on BipedalWalker-v3 and LunarLanderContinuous-v2, respectively. These figures show that the performance under the two types of replay rewards is not significantly different.
Analysis on statistics of sampled transitions.
We now check whether NERS samples both meaningful and diverse transitions by examining how its sampling behavior changes during training. To this end, we plot the TD-errors and Q-values of the sampled transitions during training on BipedalWalker-v3, Ant-v3, and Walker2D-v3 under SAC in Figure 5. We observe that NERS learns to focus on sampling transitions with high TD-errors in the early training steps, while it samples transitions with both high TD-errors and high Q-values (i.e., diverse transitions) at later training iterations. In the early training steps, the critic network for value estimation may not yet be well trained, making excessive updates of the actor harmful; thus it is reasonable that NERS selects transitions with high TD-errors to focus on updating the critic networks (Figure 5(d-f)). In the later stage, it focuses on transitions with both high Q-values and high TD-errors, since both the critic and the actor become reliable (Figure 5(a-c)). Such an adaptive sampling strategy is a unique trait of NERS that contributes to its success, which other sampling methods, such as PER and ERO, cannot replicate. Table 3 reports the statistics of the sampled transitions' TD-errors and Q-values on Pendulum-v0 under SAC at 10,000 steps (with 1,000 initial random actions). One can observe that NERS has a higher standard deviation of Q-values and TD-errors than RANDOM and ERO. Although PER has the highest standard deviation of TD-errors among the sampling methods, it has the lowest standard deviation of Q-values. Figure 5 and Table 3 show that NERS learns to sample experiences that are both diverse, i.e., selected according to different criteria, and meaningful for the agents.

4. RELATED WORK

Off-policy algorithms. One of the best-known off-policy algorithms is deep Q-network (DQN) learning with a replay buffer (Mnih et al., 2015). There are various variants of DQN learning, e.g., (Hasselt, 2010; Wang et al., 2015; Hessel et al., 2018). In particular, Rainbow (Hessel et al., 2018), one of the state-of-the-art Q-learning algorithms, combines various techniques that extend the original DQN. Moreover, DQN has been combined with policy-based learning, giving rise to various actor-critic algorithms. For instance, deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016), an actor-critic algorithm specialized for continuous control tasks, was proposed as a combination of DPG (Silver et al., 2014) and deep Q-learning (Mnih et al., 2015). Since DDPG is brittle with respect to hyper-parameter settings, various algorithms have been proposed to overcome this issue. For instance, to reduce the overestimation of the Q-value in DDPG, twin delayed DDPG (TD3) was proposed (Fujimoto et al., 2018), which extends DDPG with double Q-networks, target policy smoothing, and different update frequencies for the policy and the Q-networks. Moreover, another actor-critic algorithm, soft actor-critic (SAC) (Haarnoja et al., 2018a;b), adds the entropy of the agent's policy to the reward to encourage exploration.
Sampling methods. Owing to its simplicity, random sampling has been used in various off-policy algorithms to date. However, since it cannot guarantee optimal results, prioritized experience replay (PER) (Schaul et al., 2016), which samples transitions proportionally to their TD-error in DQN learning, was proposed, and it showed performance improvements in Atari environments.
PER is also easily applicable to various policy-based algorithms, so it is one of the most frequently used rule-based sampling methods (Hessel et al., 2018; Hou et al., 2017; Schaul et al., 2016; Wang & Ross, 2019). Furthermore, since it is reported that the newest experiences are significant for efficient Q-learning (Zhang & Sutton, 2015), PER generally imposes the maximum priority on recent transitions so that they are sampled frequently. Building on PER, imposing additional weights on recent transitions was also suggested (Brittain et al., 2019) to increase their priorities. Instead of the TD-error, a different metric can also be used in PER, e.g., the expected return (Isele & Cosgun, 2018; Jaderberg et al., 2016). Meanwhile, approaches different from PER have been proposed. For instance, to update the policy in a trust region, computing the importance weight of each transition was proposed (Novati & Koumoutsakos, 2019), so that far-policy experiences are ignored when computing the gradient. Another example is backward updating of transitions from a whole episode (Lee et al., 2019) for deep Q-learning. Although rule-based methods have shown their effectiveness on some tasks, they sometimes yield sub-optimal results on others. To overcome this issue, a neural network for replay buffer sampling was adopted (Zha et al., 2019) and shown to be valid on some continuous control tasks with the DDPG algorithm. However, its effectiveness is arguable on other tasks and algorithms (see Section 3), as it considers transitions only independently and uses only a few features, namely timesteps, rewards, and TD-errors (unlike ours). Recently, Fedus et al. (2020) showed that increasing the replay capacity and down-weighting the oldest transitions in the buffer generally improves the performance of Q-learning agents on Atari tasks. How to sample prior experiences is also a crucial issue for model-based RL algorithms, e.g., the classical Dyna architecture (Sutton, 1991).
There are variants of Dyna that study strategies for search-control, i.e., selecting which states to simulate. For instance, inspired by the fact that a high-frequency space requires many samples to learn, Dyna-Value (Pan et al., 2019) and Dyna-Frequency (Pan et al., 2020) select states by hill climbing on the value function and on its gradient and Hessian norms, respectively, to generate more samples from the models. In other words, how to prioritize transitions when sampling is nontrivial, and learning the optimal sampling strategy is critical for the sample-efficiency of the target off-policy algorithm.

5. CONCLUSION

We proposed NERS, a neural policy network that learns how to select transitions from the replay buffer so as to maximize the return of the agent. It predicts the importance of each transition in relation to the others in the memory, utilizing local and global contexts extracted from various features of the sampled transitions. We experimentally validated NERS on benchmark tasks for continuous and discrete control with various off-policy RL methods, showing that it significantly improves the performance of existing off-policy algorithms, with clear gains over prior rule-based and learning-based sampling policies. We further showed through ablation studies that this success is indeed due to modeling relative importance with consideration of local and global contexts.

A.3 DISCRETE CONTROL ENVIRONMENT

To evaluate sampling methods under Rainbow (Hessel et al., 2018), we consider the following Atari environments, in which RL agents should learn their policy by observing the RGB screen to achieve high scores in each game. Alien (NoFrameskip-v4) is a game where the player should destroy all alien eggs on the screen while escaping from three aliens; the player has a weapon that paralyzes aliens. Amidar (NoFrameskip-v4) is a game whose format is similar to MsPacman: RL agents control a monkey in a fixed rectilinear lattice to eat as many pellets as possible while avoiding chasing monsters. The monkey loses one life if it contacts a monster, and the agents can go to the next stage by visiting a certain location on the screen. Assault (NoFrameskip-v4) is a game where RL agents control a spaceship that can move along the bottom of the screen and shoot motherships, which deploy smaller ships to attack the agents; the objective is to eliminate the enemies. Asterix (NoFrameskip-v4) is a game where RL agents control a tornado whose objective is to eat hamburgers on the screen while avoiding dynamites. Boxing (NoFrameskip-v4) is a game about the sport of boxing: there are two boxers shown in a top-down view, and RL agents control one of them. They get one point if a punch hits from a long distance and two points if it hits from close range; a match finishes after two minutes or after 100 punches have hit the opponent. ChopperCommand (NoFrameskip-v4) is a game where the agent controls a helicopter in a desert, which should destroy all enemy aircraft and helicopters while protecting a convoy of trucks. Freeway (NoFrameskip-v4) is a game where RL agents control chickens running across a ten-lane highway with traffic; they are only allowed to move up or down, and the objective is to get across as many times as possible within two minutes. We also use the hyper-parameters for experience replay optimization (ERO) given in Zha et al. (2019).
Since NERS can be interpreted as an extension of PER, it basically shares PER's hyper-parameters, e.g., α and β. NERS uses various features, e.g., TD-errors and Q-values, but the newest samples have unknown Q-values and TD-errors before they are first sampled to update the agent's policy. Accordingly, we normalize Q-values and TD-errors by applying the hyperbolic tangent function, and set the newest samples' TD-errors and Q-values to 1.0. Furthermore, notice that NERS uses both the current and next states of a transition as features, so we adopt CNN layers in NERS for Atari environments as in van Hasselt et al. (2019). After flattening and reducing the output of the CNN layers with FC layers (256-64-32), we form a vector by concatenating the reduced output with the other features. This vector is the input of both the local and global networks f_l and f_g. In the case of ERO, states are not used as features, so CNN layers are unnecessary. Our objective is not to achieve maximal performance but to compare sampling methods. Accordingly, to evaluate sampling methods on Atari environments, we conduct experiments for 100,000 steps as in van Hasselt et al. (2019), although there is room for better performance with more training. In the case of continuous control environments, we conduct experiments for 500,000 steps.
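The normalization described above can be sketched as follows. This is a minimal illustration under the stated convention (tanh squashing, with 1.0 assigned to the newest samples whose values are not yet known); the function name and array-based interface are our own:

```python
import numpy as np

def normalize_features(td_errors, q_values, is_newest):
    """Squash TD-errors and Q-values into (-1, 1) with tanh; the newest samples,
    whose TD-errors/Q-values are unknown before first being sampled, are set to 1.0."""
    newest = np.asarray(is_newest)
    td = np.tanh(np.asarray(td_errors, dtype=float))
    q = np.tanh(np.asarray(q_values, dtype=float))
    td[newest] = 1.0  # convention for not-yet-evaluated transitions
    q[newest] = 1.0
    return td, q

# Third transition is the newest: its features default to the maximum value 1.0.
td, q = normalize_features([0.0, 5.0, -2.0], [10.0, -1.0, 0.3], [False, False, True])
```

Bounding the features keeps their scale comparable across environments with very different reward magnitudes, which matters since they are concatenated with other inputs of the local and global networks.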

Figure C.1 shows additional continuous control environments: Ant, Walker2d, and Hopper under TD3 and SAC, respectively. All tasks have high-dimensional observation and action spaces (see Table A.1). We believe that, in spite of the effectiveness of PER under Rainbow, the poor performance of PER under policy-based RL algorithms results from its being specialized to update Q-networks, so that the actor networks cannot be efficiently trained. One can observe that there are high variances in some environments. Indeed, it is known that learning longer on the environments in Figure C.2 improves the performance of the algorithms. However, our focus is not to obtain the highest performance but to compare the speed of learning according to the sampling methods under the same off-policy algorithms, so we do not spend more timesteps.



Footnotes: (1) Code is available at https://github.com/youngmin0oh/NERS. (2) Pendulum*: we slightly modify the original Pendulum that OpenAI Gym supports to distinguish the performance of sampling methods more clearly by making rewards sparser. Its detailed description is given in the supplementary material. (3) Learning curves for each environment are provided in the supplementary material. (4) https://gym.openai.com/ (5) https://github.com/openai/baselines



Figure 1: Sampled transitions on Pendulum-v0 from various sampling strategies: (a) Sampling by TD-error, (b) Sampling by Q-value, (c) Sampling uniformly at random.

Figure 1(c) includes transitions where the rods are in horizontal positions, which are necessary for training the agents as they provide the trajectory between the two types of states. However, the transitions are occasionally irrelevant for training both the policy and the Q networks. Indeed, the states in the red boxes of Figure 1(c) possess both low Q-values and low TD-errors. Their low TD-errors suggest that they are not meaningful for updating the Q networks. Similarly, low Q-values cannot teach the policy what good actions are.
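For reference, the quantity compared here is the one-step TD-error; a minimal sketch under hypothetical value estimates (the function and argument names are ours):

```python
def td_error(reward, gamma, q_sa, q_next_max):
    """One-step TD-error: the gap between the current estimate Q(s, a)
    and the bootstrapped target r + gamma * max_a' Q(s', a')."""
    return reward + gamma * q_next_max - q_sa
```

A transition whose estimate already matches its target has TD-error near zero, which is why such transitions contribute little to updating the Q networks.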

Figure 2: An overview of our neural experience replay sampler (NERS) framework. We first sample transitions proportionally to previously calculated scores. Then, our neural sampling policy evaluates them. Specifically, NERS consists of three networks f_l, f_g, and f_s. The first two networks obtain local and global contexts, respectively, by considering various features. Then the last network f_s evaluates the relative importance (score). The importance set is used when sampling transitions later and training the agent. This design satisfies the permutation-equivariant property.
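The structure in the caption can be illustrated with a toy NumPy sketch. All layer sizes and weight matrices here are hypothetical stand-ins for the learned networks f_l, f_g, and f_s: f_l acts on each transition independently (local), the output of f_g is mean-pooled over the batch (global), and f_s scores the concatenation, which makes the whole map permutation-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 6, 8                    # toy feature and hidden sizes
W_l = rng.normal(size=(D, H))  # stand-in for the local network f_l
W_g = rng.normal(size=(D, H))  # stand-in for the global network f_g
w_s = rng.normal(size=(2 * H,))  # stand-in for the score network f_s

def scores(X):
    """X: (n, D) batch of transition features -> (n,) relative scores."""
    local = np.tanh(X @ W_l)              # f_l: per-transition (local context)
    glob = np.tanh(X @ W_g).mean(axis=0)  # f_g: pooled over the batch (global)
    both = np.concatenate(
        [local, np.broadcast_to(glob, local.shape)], axis=1)
    return both @ w_s                     # f_s: final relative importance

# Permutation-equivariance: permuting the inputs permutes the outputs.
X = rng.normal(size=(5, D))
perm = rng.permutation(5)
assert np.allclose(scores(X)[perm], scores(X[perm]))
```

Because the pooled global context is permutation-invariant, the per-transition scores only get reordered when the batch is reordered, matching the property stated in the caption.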

Figure 3: Learning curves of off-policy RL algorithms with various sampling methods on classical and Box2D continuous control tasks. The solid line and shaded regions represent the mean and standard deviation, respectively, across five runs with random seeds.

Figure 4: (a): Comparison of NERS and variants of NERS with only a few features (reward, TD-error, and timestep) and without the global context, across five instances with random seeds. (b)-(c): Learning curves under SAC across five instances with random seeds. Here, NERS* denotes a variant of NERS that is trained by the difference of cumulative rewards from each training episode. No significant difference between NERS and NERS* is observable.

BattleZone(NoFrameskip-v4) is a tank combat game. This game provides a first-person perspective view. RL agents control a tank to destroy other tanks. The agent should avoid other tanks' missile attacks. It is also possible to hide behind various obstacles to avoid enemy attacks.

Figure C.1: Learning curves of off-policy RL algorithms with various sampling methods on MuJoCo tasks. The solid line and shaded regions represent the mean and standard deviation, respectively, across five instances.

Figure C.2: Learning curves on additional Atari environments under Rainbow

See Figure C.1 and Figure C.2.



We denote the set {1, · · · , n} by [n] for a positive integer n. Without loss of generality, suppose that the replay buffer B stores the following information as its i-th transition: B_i = (s_κ(i), a_κ(i), r_κ(i), s_κ(i)+1), where κ(i) is a function from the index of B to the corresponding timestep. We use a set of priorities P_B = {σ_1, · · · , σ_|B|} that is updated whenever sampling transitions for training the actor and critic. One can sample an index set I in [|B|] with the probability p_i of the i-th transition as follows:
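Eq. (1) itself is not reproduced in this excerpt. Assuming the standard PER form p_i = σ_i^α / Σ_k σ_k^α (a plausible reading, since NERS shares PER's α), index sampling can be sketched as:

```python
import numpy as np

def sample_indices(priorities, m, alpha=0.6, rng=None):
    """Draw m distinct indices with probability proportional to
    priority**alpha (assumed PER-style form of Eq. (1))."""
    rng = rng or np.random.default_rng()
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    p /= p.sum()
    return rng.choice(len(p), size=m, replace=False, p=p)
```

With uniform priorities this reduces to uniform sampling without replacement; a dominant priority makes its index almost certain to be drawn.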

Algorithm 1 Training NERS: batch size m and sample size n
  Initialize NERS parameters φ, a replay buffer B ← ∅, priority set P_B ← ∅, and index set I ← ∅
  for each timestep t do
    Choose a_t from the actor and collect a sample (s_t, a_t, r_t, s_{t+1}) from the environment
    Update replay buffer B ← B ∪ {(s_t, a_t, r_t, s_{t+1})} and priority set P_B ← P_B ∪ {1.0}
    for each gradient step do
      Sample an index set I by the given set P_B and Eq. (1) with |I| = m
      Calculate a score set {σ_k}_{k∈I} and weights {w_i}_{i∈I} by Eq. (
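The bookkeeping in Algorithm 1 can be rendered schematically in Python. This is an illustrative skeleton, not the paper's code: the agent update and the NERS score network are abstracted away, and only the buffer/priority mechanics are shown (new transitions enter with priority 1.0; priorities are refreshed from scores after sampling).

```python
import numpy as np

class ReplayBuffer:
    """Minimal buffer mirroring Algorithm 1's bookkeeping."""

    def __init__(self):
        self.data, self.priorities = [], []

    def add(self, transition):
        self.data.append(transition)
        self.priorities.append(1.0)  # newest samples get priority 1.0

    def sample(self, m, alpha=0.6, rng=None):
        """PER-style draw of up to m distinct indices (assumed Eq. (1))."""
        rng = rng or np.random.default_rng()
        p = np.asarray(self.priorities, dtype=np.float64) ** alpha
        p /= p.sum()
        return rng.choice(len(self.data), size=min(m, len(self.data)),
                          replace=False, p=p)

    def update_priorities(self, indices, scores):
        """Overwrite priorities with the scores computed by the sampler."""
        for i, s in zip(indices, scores):
            self.priorities[i] = float(s)
```

In the full algorithm, the scores would come from the NERS networks rather than being supplied directly.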

Table 1 and Table 2 show the mean of cumulative rewards on MuJoCo and Atari environments after 500,000 and 100,000 training steps, respectively.

Table 1: Average of cumulative rewards under SAC and TD3 on MuJoCo environments after 500,000 training steps across five instances with random seeds. Bold values represent the highest results, and the number in a bracket indicates the improvement due to NERS, compared to that of the best baseline on each environment.

Table 2: Average of cumulative rewards under Rainbow on each Atari environment after 100,000 training steps across five instances. Bold values represent the highest results, and the number in a bracket indicates the improvement due to NERS, compared to that of the best baseline on each environment.

Figure 4(a) also shows the effect.

Table: Sampled transitions' statistical values for Q-values and TD-errors on Pendulum-v0 under SAC at 10,000 training steps with initially 1,000 random actions (columns: Method, STDEV of TD-errors, STDEV of Q-values, AVG of TD-errors, AVG of Q-values; the numeric entries are not recoverable here). Here, STDEV and AVG mean the standard deviation and the average, respectively. PER has the highest STDEV of TD-errors but the lowest STDEV of Q-values. NERS has higher STDEV of both TD-errors and Q-values than RANDOM and ERO.

Table A.1). One can see that NERS outperforms the other sampling methods in most cases. Moreover, one can observe that RANDOM and ERO have almost similar performance, and PER could not show


MuJoCo (Todorov et al., 2012) is a physics engine for robot simulations supported by OpenAI Gym. MuJoCo environments provide a robot with multiple joints, and reinforcement learning (RL) agents should control the joints (action) to achieve a given goal. The observation of each environment basically includes information about the angular velocity and position of those joints. In this paper, we consider the following environments belonging to MuJoCo. Hopper(-v3) is an environment to control a one-legged robot. The robot receives a high return if it hops forward as fast as possible without failure.

Walker2d(-v3) is an environment to make two-dimensional bipedal legs walk. Learning to walk quickly without failure ensures a high return. Ant(-v3) is an environment to control a creature robot with four legs used to move. RL agents should learn how to use the four legs to move forward quickly to get a high return. Although MuJoCo environments are popular for evaluating RL algorithms, OpenAI Gym also supports additional continuous control environments belonging to the classic or Box2D simulators. We conduct experiments on the following environments among them. Pendulum* is an environment whose objective is to balance a pendulum in the upright position to get a high return. Each observation represents the angle and angular velocity. An action is a joint effort whose range is [-2, 2]. Pendulum* is slightly modified from the original (Pendulum-v0) that OpenAI supports. The only difference from the original is that agents receive a reward of 1.0 only if the rod has stayed in a sufficiently upright position (with the angle in [-π/3, π/3], where the zero angle means that the rod is completely upright) for more than 20 steps.
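The Pendulum* reward modification can be sketched as a small stateful function. This is our reading of the description above (names and the exact "> 20 consecutive steps" interpretation are our assumptions):

```python
import math

class SparsePendulumReward:
    """Reward 1.0 only once the rod's angle has stayed within
    [-pi/3, pi/3] (0 = fully upright) for more than `hold_steps`
    consecutive steps; otherwise reward 0.0."""

    def __init__(self, hold_steps=20):
        self.hold_steps = hold_steps
        self.count = 0  # consecutive steps spent near-upright

    def __call__(self, angle):
        if -math.pi / 3 <= angle <= math.pi / 3:
            self.count += 1
        else:
            self.count = 0  # leaving the band resets the streak
        return 1.0 if self.count > self.hold_steps else 0.0
```

Wrapping Pendulum-v0's reward this way makes the signal much sparser, which is the stated purpose of the modification.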

LunarLander(Continuous-v2) is an environment to control a lander. The objective of the lander is to land on a pad, which is located at coordinates (0, 0), safely and to come to rest as soon as possible. There is a penalty if the lander crashes or goes out of the screen. An action consists of parameters to control the engines of the lander.

Published as a conference paper at ICLR 2021

BipedalWalker(-v3) is an environment to control a robot. The objective is to make the robot move forward as far as possible from the initial state. An observation is information about hull angle speed, angular velocity, vertical speed, horizontal speed, and so on. An action consists of torque or velocity control for two hips and two knees.

Table A.1: Dimensions of observation and action spaces for continuous control environments

Frostbite(NoFrameskip-v4) is a game to control a man who should collect ice blocks to make his igloo. The bottom two thirds of the screen consist of four rows of horizontal ice blocks. He can move from the current row to another and obtain an ice block by jumping. RL agents are required to collect 15 ice blocks while avoiding some opponents, e.g., crabs and birds. KungFuMaster(NoFrameskip-v4) is a game to control a fighter to save his girlfriend. He can use two types of attacks (punch and kick) and move/crouch/jump actions. MsPacman(NoFrameskip-v4) is a game where RL agents control a pacman in given mazes to eat as many pellets as possible while avoiding chasing monsters. The pacman loses one life if it contacts a monster. Pong(NoFrameskip-v4) is a game about table tennis. RL agents control an in-game paddle to hit a ball back and forth. The objective is to gain 11 points before the opponent. The agents earn each point when the opponent fails to return the ball. PrivateEye(NoFrameskip-v4) is a game mixing action, adventure, and memorization, where RL agents control a private eye.
To solve five cases, the private eye should find items and return them to suitable places. Qbert(NoFrameskip-v4) is a game where RL agents control a character on a pyramid made of 28 cubes. The character should change the color of all cubes while avoiding obstacles and enemies. RoadRunner(NoFrameskip-v4) is a game to control a roadrunner (chaparral bird). The roadrunner runs to the left on the road. RL agents should pick up bird seeds while avoiding a chasing coyote and obstacles such as cars. Seaquest(NoFrameskip-v4) is a game to control a submarine to rescue divers. It can also attack enemies with missiles.

