RETURN-BASED CONTRASTIVE REPRESENTATION LEARNING FOR REINFORCEMENT LEARNING

Abstract

Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low-data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control Suite, and achieves even better performance when combined with existing auxiliary tasks.

1. INTRODUCTION

Deep reinforcement learning (RL) algorithms can simultaneously learn representations from high-dimensional inputs and learn policies based on such representations to maximize long-term returns. However, deep RL algorithms typically require large numbers of samples, which can be quite expensive to obtain (Mnih et al., 2015). In contrast, it is usually much more sample efficient to learn policies on top of learned representations/extracted features (Srinivas et al., 2020). To this end, various auxiliary tasks have been proposed to accelerate representation learning in aid of the main RL task (Suddarth and Kergosien, 1990; Sutton et al., 2011; Gelada et al., 2019; Bellemare et al., 2019; François-Lavet et al., 2019; Shen et al., 2020; Zhang et al., 2020; Dabney et al., 2020; Srinivas et al., 2020). Representative examples of auxiliary tasks include predicting the future in either the pixel space or the latent space with reconstruction-based losses (e.g., Jaderberg et al., 2016; Hafner et al., 2019a; b). Recently, contrastive learning has been introduced to construct auxiliary tasks and achieves better performance than reconstruction-based methods in accelerating RL algorithms (Oord et al., 2018; Srinivas et al., 2020). Without the need to reconstruct inputs such as raw pixels, contrastive-learning-based methods can ignore irrelevant features such as static backgrounds in games and learn more compact representations. Oord et al. (2018) propose a contrastive representation learning method based on the temporal structure of the state sequence. Srinivas et al. (2020) propose to leverage prior knowledge from computer vision, learning representations that are invariant to image augmentation. However, existing works mainly construct contrastive auxiliary losses in an unsupervised manner, without considering the feedback signals in RL problems as supervision.
In this paper, we take a further step and leverage the return feedback to design a contrastive auxiliary loss that accelerates RL algorithms. Specifically, we propose a novel method called Return-based Contrastive representation learning for Reinforcement Learning (RCRL). In our method, given an anchor state-action pair, we choose a state-action pair with the same or similar return as the positive sample, and a state-action pair with a different return as the negative sample. Then, as the auxiliary task, we train a discriminator to classify between positive and negative samples given the anchor, based on their representations. The intuition is to learn state-action representations that capture return-relevant features while ignoring return-irrelevant features. From a theoretical perspective, RCRL is supported by a novel state-action abstraction, called Z^π-irrelevance. Z^π-irrelevance abstraction aggregates state-action pairs with similar return distributions under a given policy π. We show that Z^π-irrelevance abstraction can reduce the size of the state-action space (cf. Appendix A) as well as approximate the Q values arbitrarily accurately (cf. Section 4.1). We further propose a method called Z-learning that can learn Z^π-irrelevance abstraction from sampled returns rather than the return distribution, which is hardly available in practice. Z-learning learns Z^π-irrelevance abstraction provably efficiently. Our algorithm RCRL can be seen as the empirical version of Z-learning, with a few approximations such as integrating with deep RL algorithms and collecting positive pairs within a consecutive segment of the anchor's trajectory. We conduct experiments on Atari games (Bellemare et al., 2013) and the DeepMind Control Suite (Tassa et al., 2018) in the low-data regime.
The experimental results show that our auxiliary task, combined with Rainbow (Hessel et al., 2017) for discrete control tasks or SAC (Haarnoja et al., 2018) for continuous control tasks, achieves superior performance over other state-of-the-art baselines for this regime. Our method can be further combined with existing unsupervised contrastive learning methods to achieve even better performance. We also perform a detailed analysis of how the representation changes during training with/without our auxiliary loss. We find that a good embedding network assigns similar/dissimilar representations to state-action pairs with similar/dissimilar return distributions, and that our algorithm can boost such generalization and speed up training. Our contributions are summarized as follows:
• We introduce a novel contrastive loss based on return to learn return-relevant representations and speed up deep RL algorithms.
• We theoretically build the connection between the contrastive loss and a new form of state-action abstraction, which can reduce the size of the state-action space as well as approximate the Q values arbitrarily accurately.
• Our algorithm achieves superior performance against strong baselines on Atari games and the DeepMind Control Suite in the low-data regime. Besides, the performance can be further enhanced when combined with existing auxiliary tasks.

2. RELATED WORK

2.1. AUXILIARY TASK

In reinforcement learning, auxiliary tasks can be used in both the model-based setting and the model-free setting. In the model-based setting, world models can serve as auxiliary tasks and lead to better performance, as in CRAR (François-Lavet et al., 2019), Dreamer (Hafner et al., 2019a), and PlaNet (Hafner et al., 2019b). Due to the complex components (e.g., the latent transition or reward module) in the world model, such methods are empirically unstable to train and rely on different regularizations to converge. In the model-free setting, many algorithms construct various auxiliary tasks to improve performance, such as predicting the future (Jaderberg et al., 2016; Shelhamer et al., 2016; Guo et al., 2020; Lee et al., 2020; Mazoure et al., 2020), learning value functions with different rewards or under different policies (Veeriah et al., 2019; Schaul et al., 2015; Borsa et al., 2018; Bellemare et al., 2019; Dabney et al., 2020), learning from many goals (Veeriah et al., 2018), or combining different auxiliary objectives (de Bruin et al., 2018). Moreover, auxiliary tasks can be designed based on prior knowledge about the environment (Mirowski et al., 2016; Shen et al., 2020; van der Pol et al., 2020) or the raw state representation (Srinivas et al., 2020). Hessel et al. (2019) also apply auxiliary tasks to the multi-task RL setting. Contrastive learning has seen dramatic progress recently and has been introduced to learn state representations (Oord et al., 2018; Sermanet et al., 2018; Dwibedi et al., 2018; Aytar et al., 2018; Anand et al., 2019; Srinivas et al., 2020). Temporal structure (Sermanet et al., 2018; Aytar et al., 2018) and local spatial structure (Anand et al., 2019) have been leveraged for state representation learning via contrastive losses, as in CPC (Oord et al., 2018) and CURL (Srinivas et al., 2020).

2.2. STATE ABSTRACTION

State abstraction aggregates states with similar properties, e.g., Q^π-irrelevance and Q*-irrelevance (Li et al., 2006), which keep the Q function invariant under any policy π or the optimal policy, respectively.
There are also works on state-action abstractions, e.g., MDP homomorphism (Ravindran, 2003; Ravindran and Barto, 2004a) and approximate MDP homomorphism (Ravindran and Barto, 2004b; Taylor et al., 2009), which are similar to bisimulation in keeping reward and transition invariant, but extend bisimulation from state abstraction to state-action abstraction. In this paper, we consider a new form of state-action abstraction, Z^π-irrelevance, which aggregates state-action pairs with the same return distribution and is coarser than bisimulation or homomorphism, both of which are frequently used as auxiliary tasks (e.g., Biza and Platt, 2018; Gelada et al., 2019; Zhang et al., 2020). However, it is worth noting that Z^π-irrelevance is only used to build the theoretical foundation of our algorithm and to show that our proposed auxiliary task is well aligned with the main RL task. Representation learning in deep RL is in general very different from aggregating states in the tabular case, though the latter may build a nice theoretical foundation for the former. Here we focus on designing auxiliary tasks that accelerate representation learning using contrastive learning techniques, and we propose a novel return-based contrastive method based on our proposed Z^π-irrelevance abstraction.

3. PRELIMINARY

We consider a Markov Decision Process (MDP), a tuple (S, A, P, R, µ, γ) specifying the state space S, the action space A, the state transition probability P(s_{t+1} | s_t, a_t), the reward distribution R(r_t | s_t, a_t), the initial state distribution µ ∈ ∆_S, and the discount factor γ. We denote by x := (s, a) ∈ X := S × A a state-action pair. A (stationary) policy π : S → ∆_A specifies the action selection probability at each state. Following the policy π, the discounted sum of future rewards (or return) is the random variable Z^π(s, a) = Σ_{t=0}^∞ γ^t R(s_t, a_t), where s_0 = s, a_0 = a, s_t ∼ P(·|s_{t−1}, a_{t−1}), and a_t ∼ π(·|s_t). We divide the range of the return into K equal bins {R_0 = R_min, R_1, ..., R_K = R_max} such that R_k − R_{k−1} = (R_max − R_min)/K, ∀k ∈ [K], where R_min (resp. R_max) is the minimum (resp. maximum) possible return and [K] := {1, 2, ..., K}. We use b(R) = k ∈ [K] to denote the event that R falls into the k-th bin, i.e., R_{k−1} < R ≤ R_k. Hence, b(R) can be viewed as the discretized version of the return, and the distribution of the discretized return can be represented by a K-dimensional vector Z^π(x) ∈ ∆_K whose k-th element equals Pr[R_{k−1} < Z^π(x) ≤ R_k]. The Q function is defined as Q^π(x) = E[Z^π(x)], and the state value function as V^π(s) = E_{a∼π(·|s)}[Z^π(s, a)]. The objective of RL is to find a policy π that maximizes the expected cumulative reward J(π) = E_{s∼µ}[V^π(s)]. We denote the optimal policy by π* and the corresponding optimal Q function by Q* := Q^{π*}.
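As a concrete illustration of the discretization b(R), the sketch below (a hypothetical helper, not code from the paper) maps a return to its bin index under the convention R_{k−1} < R ≤ R_k:

```python
import math

def discretize_return(R, R_min, R_max, K):
    """Map a return R to its bin index b(R) in {1, ..., K}.

    Bin k covers the half-open interval (R_{k-1}, R_k] with
    R_k = R_min + k * (R_max - R_min) / K, matching the paper's
    convention R_{k-1} < R <= R_k.  R == R_min is assigned to bin 1.
    """
    width = (R_max - R_min) / K
    if R <= R_min:
        return 1
    # ceil((R - R_min) / width) gives the smallest k with R <= R_k
    k = math.ceil((R - R_min) / width)
    return min(k, K)  # guard against floating-point overshoot at R_max
```

For example, with R_min = 0, R_max = 10, and K = 5, a return of 1.9 falls into bin 1 and a return of 2.5 into bin 2.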

4. METHODOLOGY

In this section, we present our method from both theoretical and empirical perspectives. First, we propose Z^π-irrelevance, a new form of state-action abstraction based on the return distribution. We show that the Q function for any policy (and therefore the optimal policy) can be represented under Z^π-irrelevance abstraction. Then we consider an algorithm, Z-learning, that enables us to learn a Z^π-irrelevance abstraction from the samples collected using π. Z-learning is simple and learns the abstraction by minimizing only a contrastive loss. We show that Z-learning learns a Z^π-irrelevance abstraction provably efficiently. After that, we introduce return-based contrastive representation learning for RL (RCRL), which augments standard RL algorithms with an auxiliary task adapted from Z-learning. At last, we present our network structure for learning state-action embeddings, upon which RCRL is built.

Algorithm 1: Z-learning
1: Given the policy π, the number of bins for the return K, a constant N ≥ N_{π,K}, the encoder class Φ_N, the regressor class W_N, and a distribution d ∈ ∆_X with supp(d) = X
2: D = ∅
3: for i = 1, ..., n do
4:   x_1, x_2 ∼ d
5:   R_1 ∼ Z^π(x_1), R_2 ∼ Z^π(x_2)
6:   D = D ∪ {(x_1, x_2, y = I[b(R_1) ≠ b(R_2)])}
7: end for
8: (φ̂, ŵ) = argmin_{φ∈Φ_N, w∈W_N} L(φ, w; D), where L(φ, w; D) is defined in (1)
9: return the encoder φ̂

4.1. Z^π-IRRELEVANCE ABSTRACTION

A state-action abstraction aggregates state-action pairs with similar properties, resulting in an abstract state-action space denoted as [N], where N is the size of the abstract state-action space. In this paper, we consider a new form of abstraction, Z^π-irrelevance, defined as follows: Given a policy π, a Z^π-irrelevance abstraction is a mapping φ : X → [N] such that, for any x_1, x_2 ∈ X with φ(x_1) = φ(x_2), we have Z^π(x_1) = Z^π(x_2). Given a policy π and the return discretization parameter K, we use N_{π,K} to denote the minimum N such that a Z^π-irrelevance exists. It holds that N_{π,K} ≤ N_{π,∞} ≤ |φ_B(S)| · |A| for any π and K, where |φ_B(S)| is the number of abstract states for the coarsest bisimulation (cf. Appendix A).

Proposition 4.1. Given a policy π and any Z^π-irrelevance φ : X → [N], there exists a function Q : [N] → R such that |Q(φ(x)) − Q^π(x)| ≤ (R_max − R_min)/K, ∀x ∈ X.

We provide a proof in Appendix A. Note that K controls the coarseness of the abstraction. When K → ∞, Z^π-irrelevance can represent the value function arbitrarily accurately, and therefore the optimal policy when π → π*. When using an auxiliary task to learn such an abstraction, this proposition indicates that the auxiliary task (to learn a Z^π-irrelevance) is well aligned with the main RL task (to approximate Q*). However, a large K results in a fine-grained abstraction, which requires a large N and more samples to learn the abstraction (cf. Theorem 4.1). In practice, this may not be a problem since we learn a state-action representation in a low-dimensional space R^d instead of [N] and reuse the samples collected by the base RL algorithm. Also, we do not need to choose K explicitly in the practical algorithm (cf. Section 4.3).
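To make the bound in Proposition 4.1 concrete, the following toy check (hypothetical numbers, not from the paper) builds two return distributions whose discretized versions coincide, so a Z^π-irrelevance must merge the two state-action pairs, and verifies that their Q values differ by at most one bin width (R_max − R_min)/K:

```python
R_min, R_max, K = 0.0, 10.0, 5   # bin width = 2
width = (R_max - R_min) / K

# Two hypothetical state-action pairs whose atomic return distributions
# differ, but whose discretized distributions over the K bins coincide:
# both put probability 0.5 in bin (0, 2] and 0.5 in bin (8, 10].
returns_x1 = [(1.0, 0.5), (9.0, 0.5)]   # (return value, probability)
returns_x2 = [(0.5, 0.5), (8.5, 0.5)]

q = lambda dist: sum(r * p for r, p in dist)   # Q^pi(x) = E[Z^pi(x)]
Q_x1, Q_x2 = q(returns_x1), q(returns_x2)

# A Z^pi-irrelevance maps x1 and x2 to the same abstract id, so the abstract
# Q can store only one value; reusing Q_x1 for both costs at most one bin width.
assert abs(Q_x1 - Q_x2) <= width
```

Here Q_x1 = 5.0 and Q_x2 = 4.5, within the bin width of 2, as the proposition guarantees.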

4.2. Z-LEARNING

We propose Z-learning to learn a Z^π-irrelevance from a dataset D with a contrastive loss (see Algorithm 1). Each tuple in the dataset is collected as follows: First, two state-action pairs are drawn i.i.d. from a distribution d ∈ ∆_X with supp(d) = X (cf. Line 4 in Algorithm 1). In practice, we can sample state-action pairs from the rollouts generated by the policy π. In this case, a stochastic policy (e.g., using ε-greedy) together with a standard ergodicity assumption on the MDP ensures supp(d) = X. Then, we obtain returns for the two state-action pairs (i.e., the discounted sums of the rewards after x_1 and x_2), which can be obtained by rolling out the policy π (cf. Line 5 in Algorithm 1). The binary label y indicates whether or not the two returns fall into the same bin (cf. Line 6 in Algorithm 1). The contrastive loss is defined as follows:

min_{φ∈Φ_N, w∈W_N} L(φ, w; D) := E_{(x_1,x_2,y)∼D} [(w(φ(x_1), φ(x_2)) − y)^2],   (1)

where the class of encoders that map state-action pairs to N discrete abstractions is Φ_N := {X → [N]}, and the class of tabular regressors is W_N := {[N] × [N] → [0, 1]}. Notice that we choose N ≥ N_{π,K} to ensure that a Z^π-irrelevance φ : X → [N] exists. Also, to aggregate state-action pairs, N should be smaller than |X| (otherwise we may obtain an identity mapping). In this case, mapping two state-action pairs with different return distributions to the same abstraction increases the loss and is therefore avoided. The following theorem shows that Z-learning learns a Z^π-irrelevance provably efficiently.

Theorem 4.1. Given the encoder φ̂ returned by Algorithm 1, the following inequality holds with probability 1 − δ and for any x' ∈ X:

E_{x_1∼d, x_2∼d} [ I[φ̂(x_1) = φ̂(x_2)] · |Z^π(x')^T (Z^π(x_1) − Z^π(x_2))| ] ≤ sqrt( (8N/n) (3 + 4N^2 ln n + 4 ln |Φ_N| + 4 ln(2/δ)) ),

where |Φ_N| is the cardinality of the encoder function class and n is the size of the dataset. We provide the proof in Appendix B.
Although |Φ_N| is upper bounded by N^{|X|}, it is generally much smaller for deep encoders that generalize over the state-action space. The theorem shows that whenever φ̂ maps two state-action pairs x_1, x_2 to the same abstraction, Z^π(x_1) ≈ Z^π(x_2) up to an error proportional to 1/√n (ignoring logarithmic factors). The following corollary shows that φ̂ becomes a Z^π-irrelevance as n → ∞.

Corollary 4.1.1. The encoder φ̂ returned by Algorithm 1 with n → ∞ is a Z^π-irrelevance, i.e., for any x_1, x_2 ∈ X, Z^π(x_1) = Z^π(x_2) if φ̂(x_1) = φ̂(x_2).
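A minimal pure-Python sketch of Z-learning over a tiny finite encoder class may clarify Algorithm 1. All names here (sample_pair, sample_return, etc.) are illustrative assumptions, not the authors' implementation; the tabular regressor w is fit in closed form, since the square-loss minimizer per abstract-id pair is the mean label:

```python
import random
from collections import defaultdict

def z_learning(sample_pair, sample_return, bin_of, encoders, n=2000):
    """Sketch of Algorithm 1 (Z-learning) over a finite encoder class.

    sample_pair()    -> (x1, x2) drawn i.i.d. from d
    sample_return(x) -> one sampled return R ~ Z^pi(x)
    bin_of(R)        -> discretized return b(R)
    encoders         -> finite class Phi_N: each maps x -> abstract id in [N]
    """
    data = []
    for _ in range(n):
        x1, x2 = sample_pair()
        # Label 1 when the two returns fall into different bins; this
        # orientation matches f* = 1 - Z^pi(x1)^T Z^pi(x2) in Appendix B
        # (flipping the label is equivalent for the square loss).
        y = int(bin_of(sample_return(x1)) != bin_of(sample_return(x2)))
        data.append((x1, x2, y))

    def loss(phi):
        # Closed-form optimal tabular w: the mean label per abstract-id pair.
        sums, counts = defaultdict(float), defaultdict(int)
        for x1, x2, y in data:
            key = (phi(x1), phi(x2))
            sums[key] += y
            counts[key] += 1
        w = {k: sums[k] / counts[k] for k in counts}
        return sum((w[(phi(x1), phi(x2))] - y) ** 2
                   for x1, x2, y in data) / len(data)

    return min(encoders, key=loss)  # empirical risk minimization over Phi_N
```

On a toy problem with four state-action pairs whose returns cluster into two bins, the encoder that separates the two clusters attains zero loss and is selected, while an encoder that mixes the clusters incurs positive loss.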

4.3. RETURN-BASED CONTRASTIVE LEARNING FOR RL (RCRL)

We adapt Z-learning into an auxiliary task that helps the agent learn a representation with meaningful semantics. The resulting auxiliary-task-based RL algorithm is called RCRL and is shown in Algorithm 2. Here, we use Rainbow (for discrete control) and SAC (for continuous control) as the base RL algorithms for RCRL. However, RCRL can also be easily incorporated into other model-free RL algorithms. While Z-learning relies on a dataset sampled by rolling out the current policy, RCRL constructs such a dataset from the samples collected by the base RL algorithm and therefore does not require additional samples, e.g., it directly uses the replay buffer in Rainbow or SAC (see Lines 7 and 8 in Algorithm 2). Compared with Z-learning, we use the state-action embedding network φ : X → R^d shared with the base RL algorithm as the encoder, and an additional discriminator w : R^d × R^d → [0, 1] trained by the auxiliary task as the regressor. However, when implementing Z-learning as the auxiliary task, the labels in the dataset may be unbalanced. Although this does not cause problems in the theoretical analysis, where we assume the Bayes optimal predictor can be obtained for the contrastive loss, it may prevent the discriminator from learning properly in practice (cf. Line 8 in Algorithm 1). To solve this problem, instead of drawing samples independently from the replay buffer B (analogous to sampling from the distribution d in Z-learning), we sample the pairs for D as follows: As a preparation, we cut the trajectories in B into segments, where each segment contains state-action pairs with the same or similar returns. Specifically, in Atari games, we create a new segment once the agent receives a non-zero reward. In DMControl tasks, we first prescribe a threshold and then create a new segment once the cumulative reward within the current segment exceeds this threshold. For each sample in D, we first draw an anchor state-action pair from B at random.
Afterwards, we generate a positive sample by drawing a state-action pair from the same segment as the anchor. Then, we draw another state-action pair randomly from B and use it as the negative sample.

Algorithm 2: Return-based Contrastive learning for RL (RCRL)
1: Initialize the embedding φ_θ : X → R^d and a discriminator w_ϑ : R^d × R^d → [0, 1]
2: Initialize the parameters ϕ for the base RL algorithm that uses the learned embedding φ_θ
3: Given a batch of samples D, the loss function for the base RL algorithm is L_RL(φ_θ, ϕ; D)
4: A replay buffer B = ∅
5: for each iteration do
6:   Roll out the current policy and store the samples in the replay buffer B
7:   Draw a batch of samples D from the replay buffer B
8:   Update the parameters with the loss function L(φ_θ, w_ϑ; D) + L_RL(φ_θ, ϕ; D)
9: end for
10: return the learned policy

We believe our auxiliary task can boost learning because better return-induced representations facilitate generalization across state-action pairs. Learning on one state-action pair affects the values of all state-action pairs that share similar representations with it. When the embedding network assigns similar representations to similar state-action pairs (e.g., pairs sharing similar return distributions), the update for one state-action pair is representative of the updates for other similar pairs, which improves sample efficiency. However, such generalization may not be achieved by the base RL algorithm alone: when trained with only a return-based loss, similar state-action pairs may have similar Q values but very different representations. One may argue that several temporal-difference updates could propagate the values among state-action pairs with the same sampled return, so that all such pairs are eventually assigned similar Q values.
However, since deep RL adopts bootstrapping/temporal-difference learning with a frozen target network, it can take much longer to propagate values across different state-action pairs this way than to generalize across them directly via contrastive learning. Meanwhile, since we construct the auxiliary task based on return, a source of information very different from image augmentation or temporal structure, our method can be combined with existing methods to achieve further improvement.
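The segment construction and triplet sampling described above can be sketched as follows (a simplified illustration with hypothetical helper names, not the authors' implementation):

```python
import random

def split_into_segments(trajectory, threshold=None):
    """Cut one trajectory into segments of (state, action) pairs that share
    the same or similar returns, mirroring the heuristics described above.

    trajectory: list of (state, action, reward) tuples.
    threshold=None -> Atari-style: start a new segment after every
                      non-zero reward (sparse rewards).
    threshold=c    -> DMControl-style: start a new segment once the
                      cumulative reward inside the current segment
                      exceeds c (dense rewards).
    """
    segments, current, acc = [], [], 0.0
    for s, a, r in trajectory:
        current.append((s, a))
        acc += r
        sparse_cut = threshold is None and r != 0
        dense_cut = threshold is not None and acc > threshold
        if sparse_cut or dense_cut:
            segments.append(current)
            current, acc = [], 0.0
    if current:
        segments.append(current)
    return segments

def sample_contrastive_triplet(segments, rng=random):
    """Draw (anchor, positive, negative): the positive comes from the
    anchor's segment, the negative uniformly from all stored pairs."""
    seg = rng.choice([s for s in segments if len(s) >= 2])
    anchor, positive = rng.sample(seg, 2)
    negative = rng.choice([x for s in segments for x in s])
    return anchor, positive, negative
```

For instance, a six-step trajectory with rewards (0, 0, 1, 0, 0, 1) is cut into two segments of three pairs each under the Atari-style rule.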

4.4. NETWORK STRUCTURE FOR STATE-ACTION EMBEDDING

In our algorithm, the auxiliary task is based on a state-action embedding, instead of the state embedding frequently used in previous work (e.g., Srinivas et al., 2020). To facilitate our algorithm, we design two new structures, for Atari 2600 games (discrete actions) and the DMControl Suite (continuous actions) respectively, shown in Figure 1. For Atari, we learn an embedding for each action and use the element-wise product of the state embedding and the action embedding as the state-action embedding. For DMControl, the action embedding is a real-valued vector, and we use the concatenation of the action embedding and the state embedding.
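The two combination rules can be sketched as plain functions (illustrative only; in the actual model the embeddings are produced and trained by neural networks):

```python
def state_action_embedding_discrete(state_emb, action_embs, action_id):
    """Atari-style (discrete actions): element-wise product of the state
    embedding and a learned per-action embedding of the same dimension."""
    a = action_embs[action_id]
    return [s * ai for s, ai in zip(state_emb, a)]

def state_action_embedding_continuous(state_emb, action_emb):
    """DMControl-style (continuous actions): concatenate the state embedding
    with a real-valued action embedding."""
    return list(state_emb) + list(action_emb)
```

Note the discrete variant keeps the embedding dimension fixed at d, while the continuous variant produces a (d + action-embedding-size)-dimensional vector.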

5. EXPERIMENTS

In this section, we conduct the following experiments: 1) We implement RCRL on Atari 2600 games (Bellemare et al., 2013) and the DMControl Suite (Tassa et al., 2020) and compare with state-of-the-art model-free methods and strong model-based methods. In particular, we compare with CURL (Srinivas et al., 2020), a top-performing auxiliary-task-based RL algorithm for pixel-based control tasks that also uses a contrastive loss. In addition, we combine RCRL with CURL to study whether our auxiliary task further boosts learning when combined with other auxiliary tasks. 2) To further study why our algorithm works, we analyze the generalization of the learned representations. Specifically, we compare the cosine similarity between the representations of different state-action pairs. We provide the implementation details in Appendix C.

5.1. EVALUATION ON ATARI AND DMCONTROL

Experiments on Atari. Our experiments on Atari are conducted in the low-data regime of 100k interactions between the agent and the environment, which corresponds to two hours of human play. We show the performance of different algorithms/baselines, including the scores for average human play (Human), SimPLe (Kaiser et al., 2019), a strong model-based baseline for this regime, the original Rainbow (Hessel et al., 2017), Data-Efficient Rainbow (ERainbow; van Hasselt et al., 2019), ERainbow with state-action embeddings (ERainbow-sa, cf. Figure 1 Left), CURL (Srinivas et al., 2020), which is based on ERainbow, RCRL, which is based on ERainbow-sa, and the algorithm that adds the auxiliary losses of both CURL and RCRL to ERainbow-sa (RCRL+CURL). We show the evaluation results of our algorithm and the baselines on Atari games in Table 1. First, we observe that using state-action embeddings instead of state embeddings in ERainbow does not lead to a significant performance change (compare ERainbow with ERainbow-sa). Second, built upon ERainbow-sa, the auxiliary task in RCRL leads to better performance than not only the base RL algorithm but also SimPLe and CURL in terms of the median human-normalized score (HNS). Third, RCRL further boosts learning when combined with CURL and achieves the best performance on 7 out of 26 games, which shows that our auxiliary task can be successfully combined with other auxiliary tasks that embed different sources of information into the learned representation.

Experiments on DMControl.
For DMControl, we compare our algorithm with the following baselines: Pixel SAC (Haarnoja et al., 2018), the base RL algorithm, which receives images as input; SLAC (Lee et al., 2019), which learns a latent variable model and then updates the actor and critic based on it; SAC+AE (Yarats et al., 2019), which uses a regularized autoencoder for reconstruction as the auxiliary task; and PlaNet (Hafner et al., 2019b) and Dreamer (Hafner et al., 2019a), which learn a latent-space world model and explicitly plan with the learned model. We also compare with a skyline, State SAC, which receives the low-dimensional state representation instead of images as input. Unlike Atari games, tasks in DMControl yield dense rewards. Consequently, we split trajectories into segments using a threshold such that the difference of returns within each segment does not exceed this threshold. Similarly, we test the algorithms in the low-data regime of 500k interactions. We show the evaluation results in Figure 2. Our auxiliary task not only brings performance improvement over the base RL algorithm but also outperforms CURL and other state-of-the-art baselines across tasks. Moreover, we observe that our algorithm is more robust across runs with different seeds than Pixel SAC and CURL (e.g., on the task Ball in cup, Catch).

5.2. ANALYSIS ON THE LEARNED REPRESENTATION

We analyze the learned representations of our model to demonstrate that our auxiliary task attains a representation with better generalization, which may explain why our algorithm succeeds. We use cosine similarity to measure the generalization from one state-action pair to another in the deep model. Given two state-action pairs x_1, x_2 ∈ X, the cosine similarity is defined as φ_θ(x_1)^T φ_θ(x_2) / (||φ_θ(x_1)|| ||φ_θ(x_2)||), where φ_θ(·) is the learnt embedding network. We show the cosine similarity of the representations between positive pairs (sampled within the same segment and therefore likely to share similar return distributions) and negative pairs (i.e., randomly sampled state-action pairs) during training on the game Alien in Figure 4. First, we observe that once a good policy has been learned, the representations of positive pairs are similar while those of negative pairs are dissimilar. This indicates that a good representation (i.e., one that supports a good policy) aggregates state-action pairs with similar return distributions. Then, we find that our auxiliary loss accelerates such generalization of the representation, which makes RCRL learn faster.
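The cosine similarity used in this analysis is standard; a minimal sketch:

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = u.v / (||u|| ||v||); returns 0.0 for zero-norm inputs."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Values near 1 indicate that two embeddings point in the same direction (strong generalization between the pairs), values near 0 indicate orthogonal embeddings, and values near -1 indicate opposite directions.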

6. CONCLUSION

In this paper, we propose return-based contrastive representation learning for RL (RCRL), which introduces a return-based auxiliary task to facilitate policy training with standard RL algorithms. Our auxiliary task is theoretically justified to learn representations that capture the structure of Z^π-irrelevance, which can reduce the size of the state-action space as well as approximate the Q values arbitrarily accurately. Experiments on Atari games and the DMControl Suite in the low-data regime demonstrate that our algorithm achieves superior performance, both when using our auxiliary task alone and when combining it with other auxiliary tasks. As for future work, we are interested in combining different auxiliary tasks in a more sophisticated way, perhaps with a meta-controller. Another potential direction is providing a theoretical analysis for auxiliary tasks and justifying why existing auxiliary tasks can speed up deep RL algorithms.

A Z^π-IRRELEVANCE

A.1 COMPARISON WITH BISIMULATION

We consider a bisimulation φ_B and denote the set of abstract states as Z := φ_B(S). The bisimulation φ_B is defined as follows:

Definition A.1 (Bisimulation (Givan et al., 2003)). φ_B is a bisimulation if ∀s_1, s_2 ∈ S with φ_B(s_1) = φ_B(s_2), ∀a ∈ A, z' ∈ Z: R(s_1, a) = R(s_2, a) and Σ_{s'∈φ_B^{-1}(z')} P(s'|s_1, a) = Σ_{s'∈φ_B^{-1}(z')} P(s'|s_2, a).

Notice that the coarseness of Z^π-irrelevance depends on K, the number of bins for the return. When K → ∞, two state-action pairs x and x' are aggregated only when Z^π(x) =_D Z^π(x') (equality in distribution), a strict condition resulting in a fine-grained abstraction. Here, we provide the following proposition to illustrate that Z^π-irrelevance abstraction is coarser than bisimulation even when K → ∞.

Proposition A.1. Let φ_B be the coarsest bisimulation. Then φ_B induces a Z^π-irrelevance abstraction for any policy π defined over Z.
Specifically, if s_1, s_2 ∈ S satisfy φ_B(s_1) = φ_B(s_2), then Z^π(s_1, a) =_D Z^π(s_2, a), ∀a ∈ A, for any policy π defined over Z. Consider a state-action abstraction φ̃_B augmented from the coarsest bisimulation φ_B: φ̃_B(s_1, a_1) = φ̃_B(s_2, a_2) if and only if φ_B(s_1) = φ_B(s_2) and a_1 = a_2. The proposition indicates that |φ_B(S)| · |A| = |φ̃_B(X)| ≥ N_{π,∞} ≥ N_{π,K} for any K and for any π defined over Z, i.e., bisimulation is no coarser than Z^π-irrelevance. Note that there exists an optimal policy defined over Z (Li et al., 2006). In practice, Z^π-irrelevance should be far coarser than bisimulation when we only consider one specific policy π. Therefore, learning a Z^π-irrelevance should be easier than learning a bisimulation.

Proof. First, for a fixed policy π : Z → ∆_A, we prove an invariance property: if two state distributions are identical when projected to Z, then the corresponding reward distributions are identical, and the next-state distributions projected to Z are also identical. Then, we use this invariance property to prove the proposition. The proof of the invariance property goes as follows: Consider two state distributions on the t-th step that are identical over Z, i.e., P_1, P_2 ∈ ∆_Z with P_1 = P_2. We only require the two state distributions to be identical when projected to Z, so they may differ on S. Specifically, if we write a state distribution over S as P(s) = P(z) q(s|z) where φ_B(s) = z, the conditional distributions q for the two state distributions may differ. However, this is sufficient to ensure that the corresponding state distributions on Z are identical on the next step (and therefore on all subsequent steps). Denote by R_1 and R_2 the rewards on the t-th step, which are random variables. Since, by the definition of bisimulation, the reward R(s, a) is the same for every state s in the same abstract state z, and π depends on s only through z, the reward distribution on the t-th step satisfies P(R_1 = r | P_1) = Σ_{z∈Z} P_1(z) Σ_a π(a|z) I[R(s, a) = r] (for any representative s ∈ φ_B^{-1}(z)) = P(R_2 = r | P_2), i.e., R_1 =_D R_2. By induction over the steps, the reward sequences satisfy R_1^{(t)} =_D R_2^{(t)} for all t, and therefore Z^π(s_1, a) =_D Z^π(s_2, a), ∀a ∈ A.

A.2 COMPARISON WITH π-BISIMULATION

Similar to Z^π-irrelevance, a recently proposed state abstraction, π-bisimulation (Castro, 2020), is also tied to a behavioral policy π. It is instructive to compare the coarseness of Z^π-irrelevance and π-bisimulation. For completeness, we restate the definition of π-bisimulation.

Definition A.2 (π-bisimulation (Castro, 2020)). Given a policy π, φ_{B,π} is a π-bisimulation if ∀s_1, s_2 ∈ S with φ_{B,π}(s_1) = φ_{B,π}(s_2):
Σ_a π(a|s_1) R(s_1, a) = Σ_a π(a|s_2) R(s_2, a),
Σ_a π(a|s_1) Σ_{s'∈φ_{B,π}^{-1}(z')} P(s'|s_1, a) = Σ_a π(a|s_2) Σ_{s'∈φ_{B,π}^{-1}(z')} P(s'|s_2, a), ∀z' ∈ Z,
where Z := φ_{B,π}(S) is the set of abstract states.

However, π-bisimulation does not consider state-action pairs that are not visited under the policy π (e.g., a pair (s, a) with π(a|s) = 0), whereas Z^π-irrelevance is defined on all state-action pairs. Therefore, it is hard to connect the two unless we also define Z^π-irrelevance on the state space (instead of the state-action space) in a similar way.

Definition A.3 (Z^π-irrelevance on the state space). Given s ∈ S, denote Z^π(s) := Σ_a π(a|s) Z^π(s, a). Given a policy π, φ is a Z^π-irrelevance if ∀s_1, s_2 ∈ S with φ(s_1) = φ(s_2), Z^π(s_1) = Z^π(s_2).

Based on the above definitions, the following proposition indicates that such a Z^π-irrelevance is coarser than π-bisimulation.

Proposition A.2. Given a policy π and the coarsest π-bisimulation φ_{B,π}, if s_1, s_2 ∈ S satisfy φ_{B,π}(s_1) = φ_{B,π}(s_2), then Z^π(s_1) = Z^π(s_2).

Proof. Starting from s_1 and s_2 and following the policy π, the reward distribution and the state distribution over Z on each step are identical, which can be proved by induction. Then, we conclude that Z^π(s_1) =_D Z^π(s_2) and thus Z^π(s_1) = Z^π(s_2).

A.3 BOUND ON THE REPRESENTATION ERROR

For completeness, we restate Proposition 4.1.

Proposition A.3. Given a policy π and any Z^π-irrelevance φ : X → [N], there exists a function Q : [N] → R such that |Q(φ(x)) − Q^π(x)| ≤ (R_max − R_min)/K, ∀x ∈ X.

Proof. Given a policy π and a Z^π-irrelevance φ, we can construct a Q such that |Q(φ(x)) − Q^π(x)| ≤ (R_max − R_min)/K for all x ∈ X in the following way: for all x ∈ X, one by one, we assign Q(z) ← Q^π(x), where z = φ(x). In this way, for any x ∈ X with z = φ(x), Q(z) = Q^π(x') for some x' ∈ X such that Z^π(x') = Z^π(x). This implies that |Q(z) − Q^π(x)| = |Q^π(x') − Q^π(x)| ≤ (R_max − R_min)/K. This also applies to the optimal policy π*.
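The construction in this proof is straightforward to mirror in code. The toy sketch below (all names and numbers are illustrative, not from the paper's experiments) performs the one-by-one assignment Q(z) ← Q^π(x) and checks that the resulting error stays within one return bin:

```python
# Toy instance of the construction in Proposition A.3. State-action pairs
# sharing an abstract code z = phi(x) have Q^pi values differing by at
# most one return bin, (Rmax - Rmin) / K.

Rmax, Rmin, K = 1.0, 0.0, 10
bin_width = (Rmax - Rmin) / K          # 0.1

# Hypothetical Z^pi-irrelevance: pair name -> abstract code.
phi = {"x1": 0, "x2": 0, "x3": 1}
# Q^pi values; pairs with the same code differ by less than bin_width.
Q_pi = {"x1": 0.42, "x2": 0.47, "x3": 0.90}

# "For all x, one by one, we assign Q(z) <- Q^pi(x)": later assignments
# simply overwrite earlier ones for the same code.
Q = {}
for x, z in phi.items():
    Q[z] = Q_pi[x]

# The representation error is bounded by the bin width.
err = max(abs(Q[phi[x]] - Q_pi[x]) for x in phi)
assert err <= bin_width
```

The overwriting order does not matter for the bound: whichever x' wins, its return distribution agrees with that of every other x mapped to the same code, so the Q^π values can differ by at most one bin.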

B PROOF

Notice that Corollary 4.1.1 is the asymptotic case (n → ∞) of Theorem 4.1. We first provide a proof sketch for Corollary 4.1.1, which ignores sampling issues and is thus more illustrative. Later, we provide the proof of Theorem 4.1, which mainly follows the techniques used in Misra et al. (2019).

As a first step, it is helpful to find the Bayes optimal predictor f* when the size of the dataset is infinite. We use the fact that the Bayes optimal predictor for a square loss is the conditional mean, i.e., given a distribution D̄, the Bayes optimal predictor f* = argmin_f E_{(x,y)~D̄}[(f(x) − y)^2] satisfies f*(x') = E_{(x,y)~D̄}[y | x = x']. Using this property, we can obtain the Bayes optimal predictor over all functions {X × X → [0, 1]} for our contrastive loss:

f*(x_1, x_2) = E_{(x'_1, x'_2, y)~D̄}[y | x'_1 = x_1, x'_2 = x_2] = E_{R_1~Z^π(x_1), R_2~Z^π(x_2)} I[b(R_1) ≠ b(R_2)] = 1 − Z^π(x_1)^T Z^π(x_2),    (6)

where we use D̄ to denote the distribution from which each tuple in the dataset D is drawn. To establish the theorem, we require such an optimal predictor f* to be in the function class F_N. Following a similar argument to Proposition 10 in Misra et al. (2019), it is not hard to show that using N > N^{π,K} is sufficient for this realizability condition to hold.

Corollary 4.1.1. The encoder φ̂ returned by Algorithm 1 with n → ∞ is a Z^π-irrelevance, i.e., for any x_1, x_2 ∈ X, Z^π(x_1) = Z^π(x_2) if φ̂(x_1) = φ̂(x_2).

Proof of Corollary 4.1.1. Considering the asymptotic case (i.e., n → ∞), we have f̂ = f*, where f̂(·, ·) := ŵ(φ̂(·), φ̂(·)) and ŵ and φ̂ are returned by Algorithm 1. If φ̂(x_1) = φ̂(x_2), then for any x ∈ X,

1 − Z^π(x_1)^T Z^π(x) = f*(x_1, x) = f̂(x_1, x) = f̂(x_2, x) = f*(x_2, x) = 1 − Z^π(x_2)^T Z^π(x).

We obtain Z^π(x_1) = Z^π(x_2) by letting x = x_1 or x = x_2.
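The closed form of the Bayes optimal predictor can be checked numerically. The numpy sketch below (with made-up K-bin return distributions, and assuming the label y is the indicator that the two sampled returns fall into different bins) estimates the conditional mean of y by Monte Carlo and compares it with 1 − Z^π(x_1)^T Z^π(x_2):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discretized (K-bin) return distributions for two
# state-action pairs; these numbers are assumptions for the demo.
K = 4
Z1 = np.array([0.5, 0.3, 0.1, 0.1])
Z2 = np.array([0.2, 0.2, 0.3, 0.3])

# Label y = I[b(R1) != b(R2)]: sample a return bin for each pair and
# mark 1 when the bins differ. The conditional mean of y is then
# 1 - Z1^T Z2.
n = 200_000
b1 = rng.choice(K, size=n, p=Z1)
b2 = rng.choice(K, size=n, p=Z2)
y = (b1 != b2).astype(float)

estimate = y.mean()           # Monte Carlo estimate of E[y | x1, x2]
closed_form = 1.0 - Z1 @ Z2   # here 1 - 0.22 = 0.78

assert abs(estimate - closed_form) < 0.01
```

With 2 × 10^5 samples the Monte Carlo error is on the order of 10^-3, so the assertion comfortably holds.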



Figure 1: The network structure of RCRL for Atari (left) and DMControl Suite (right).

Figure 2: Scores achieved by RCRL and the baseline algorithms during training on different tasks in the DMControl suite. The line and the shaded area indicate the average and the standard deviation over 5 random seeds, respectively.

Figure 3: Analysis of the learned representation on Alien. (a) The cosine similarity between the representations of the positive/negative state-action pair and the anchor during the training of Rainbow and RCRL. (b) The scores of the two algorithms during the training.

P(R_1 = r | P_1) = Σ_{z∈Z, a∈A, s∈S} P(r|s, a) q_1(s|z) π(a|z) P_1(z)
= Σ_{z∈Z, a∈A} P(r|z, a) π(a|z) P_1(z)
= Σ_{z∈Z, a∈A} P(r|z, a) π(a|z) P_2(z)
= P(R_2 = r | P_2),    (3)

for any r ∈ R, where we use the property of bisimulation, R|s, a =_D R|s', a for all φ_B(s) = φ_B(s'), in the second equality. This indicates that R_1 | P_1 =_D R_2 | P_2.

PROOF OF COROLLARY 4.1.1

Recall that Z-learning aims to solve the following optimization problem:

min_{φ∈Φ_N, w∈W_N} L(φ, w; D) := E_{(x_1, x_2, y)~D} (w(φ(x_1), φ(x_2)) − y)^2,    (5)

which can also be regarded as finding a compound predictor f(·, ·) := w(φ(·), φ(·)) over the function class F_N := {(x_1, x_2) ↦ w(φ(x_1), φ(x_2)) : w ∈ W_N, φ ∈ Φ_N}.
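To make the compound structure concrete, the following sketch represents φ as a lookup table and w as an N × N matrix and evaluates the empirical version of the loss in Equation (5) on a toy dataset. The tabular parameterization and the toy data are our own assumptions for illustration, not the paper's actual function classes:

```python
import numpy as np

N = 3                                  # number of abstract codes
phi = np.array([0, 0, 1, 2, 1])        # encoder: state-action index -> code
w = np.full((N, N), 0.5)               # pairwise predictor over codes, in [0, 1]

# Toy dataset of (x1, x2, y) index triples with binary labels
# (y = 1 for a "negative" pair, y = 0 for a "positive" pair).
data = [(0, 1, 0.0), (0, 2, 1.0), (3, 4, 1.0)]

def loss(phi, w, data):
    """Empirical square loss for the compound predictor
    f(x1, x2) = w(phi(x1), phi(x2))."""
    return np.mean([(w[phi[x1], phi[x2]] - y) ** 2 for x1, x2, y in data])

print(loss(phi, w, data))   # 0.25 for the uninformative all-0.5 predictor
```

In the actual algorithm, φ and w are neural networks optimized jointly by gradient descent on this loss; the tabular version only shows the shape of the objective.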

Figure 4: Analysis of the learned representation on Alien with five seeds. (a) The cosine similarity between the representations of the positive/negative state-action pair and the anchor during the training of Rainbow and RCRL. (b) The game scores of the two algorithms during the training.

adopt contrastive auxiliary tasks to accelerate representation learning and speed up the main RL task, by leveraging the temporal structure and image augmentation, respectively. To the best of our knowledge, we are the first to leverage returns to construct a contrastive auxiliary task for speeding up the main RL task.

Table 1: Scores of different algorithms/baselines on 26 games of the Atari-100k benchmark. We show the mean score averaged over five random seeds.

7. ACKNOWLEDGMENT

Guoqing Liu and Nenghai Yu are supported in part by the Natural Science Foundation of China under Grant U20B2047 and the Exploration Fund Project of the University of Science and Technology of China under Grant YD3480002001. Chuheng Zhang and Jian Li are supported in part by the National Natural Science Foundation of China under Grants 61822203, 61772297, and 61632016, the Zhongguancun Haihua Institute for Frontier Information Technology, the Turing AI Institute of Nanjing, and the Xi'an Institute for Interdisciplinary Information Core Technology.


Similarly, we denote by P'_1 and P'_2 the state distributions over Z on the (t+1)-th step. For any z' ∈ Z, the two next-state distributions over Z coincide, where we again use the property of bisimulation in the second equality, i.e., the quantity in the bracket is equal across different s with q(s|z) > 0 for a fixed z. This indicates that P'_1 = P'_2.

At last, consider two states s_1, s_2 ∈ S with φ_B(s_1) = φ_B(s_2) = z on the first step; we take a ∈ A for both states on this step and later take the actions following π. We denote the subsequent reward and state distributions over Z as R^(1)_1, R^(2)_1, ..., P^(1)_1, ... and R^(1)_2, R^(2)_2, ..., P^(1)_2, ... respectively, where the superscripts indicate time steps. Therefore, we can deduce that R^(1)_1 =_D R^(1)_2, R^(2)_1 =_D R^(2)_2, ..., and therefore Z^π(s_1, a) =_D Z^π(s_2, a), ∀a ∈ A.

Theorem 4.1. Given the encoder φ̂ returned by Algorithm 1, the following inequality holds with probability 1 − δ and for any x ∈ X, where |Φ_N| is the cardinality of the encoder function class and n is the size of the dataset.

The theorem shows that 1) whenever φ̂ maps two state-action pairs x_1, x_2 to the same abstraction, Z^π(x_1) ≈ Z^π(x_2) up to an error proportional to 1/√n (ignoring the logarithmic factor), and 2) when the difference between the return distributions (e.g., Z^π(x_1) − Z^π(x_2)) is large, the chance that the two state-action pairs (e.g., x_1 and x_2) are assigned the same encoding is small. However, since state-action pairs are sampled i.i.d. from the distribution d, the bound holds in an average sense instead of the worst-case sense.

The overview of the proof is as follows. Note that the theorem builds a connection between the optimization problem defined in Equation (5) and the learned encoder φ̂. To prove the theorem, we first find the minimizer f* of the optimization problem when the number of samples is infinite (calculated in Equation (6) in the previous subsection). Then, we bound the difference between the learned predictor f̂ and f* in Lemma B.2 (which utilizes Lemma B.1) when the number of samples is finite.
At last, utilizing the compound structure of f̂, we can relate it to the encoder φ̂.

Lemma B.1 bounds the logarithm of the pointwise covering number of the function class F_N.

Proposition B.1 (Misra et al., 2019). Consider a function class G : X → R and n samples {(x_i, y_i)}_{i=1}^n drawn from D̄, where x_i ∈ X and y_i ∈ [0, 1]. The Bayes optimal function is g* = argmin_{g∈G} E_{(x,y)~D̄} (g(x) − y)^2 and the empirical risk minimizer is ĝ = argmin_{g∈G} (1/n) Σ_{i=1}^n (g(x_i) − y_i)^2.

Lemma B.2. Given the empirical risk minimizer f̂ in Algorithm 1, the corresponding bound on the difference between f̂ and f* holds with probability at least 1 − δ.

Proof. This follows directly from the combination of Lemma B.1 and Proposition B.1 by letting ε = 1/n.

Proof of Theorem 4.1. First, we denote the prior probability over the i-th abstraction. Then, for all x ∈ X, we bound the target quantity in five steps. The first step is obtained using Equation (6). The second step is the triangle inequality. The third step uses the fact that under event E_i we have φ̂(x_1) = φ̂(x_2) and therefore f̂(x_1, x') = f̂(x_2, x'). In the fourth step, we drop the dependence on the other variable and marginalize over D_coup. The last step is the Cauchy–Schwarz inequality. We complete the proof by summing over all i ∈ [N] and combining the above results.

C IMPLEMENTATION DETAILS

C.1 IMPLEMENTATION FOR ATARI GAMES

Codebase. For Atari games, we use the codebase from https://github.com/Kaixhin/Rainbow, which is an implementation of data-efficient Rainbow (van Hasselt et al., 2019).

Network architecture. To implement our algorithm, we modify the architecture of the value network as shown in Figure 1 (left). In data-efficient Rainbow, the state embedding has a dimension of 576. We maintain an action embedding for each action, which is a vector of the same dimension treated as trainable parameters. Then, we generate the state-action embedding by taking the element-wise product of the state embedding and the action embedding. This state-action embedding is shared by the auxiliary task and the main RL task. Afterwards, the value network outputs the return distribution for this state-action pair (note that Rainbow uses a distributional RL algorithm, C51 (Bellemare et al., 2017)) instead of the return distributions of all actions for the input state as in the original implementation.

Hyperparameters. We use exactly the same hyperparameters as those used in van Hasselt et al. (2019) and CURL (Srinivas et al., 2020) to quantify the gain brought by our auxiliary task and to compare with CURL. We refer the readers to their papers for the detailed list of hyperparameters.

Balancing the auxiliary loss and the main RL loss. Unlike CURL (and other previous work such as Jaderberg et al. (2016); Yarats et al. (2019)) that uses different or learned coefficients and learning rates for different games to balance the auxiliary task and the RL updates, our algorithm uses an equal weight and learning rate for both the auxiliary task and the main RL task. This demonstrates that our auxiliary task is robust and does not need careful tuning of these hyperparameters compared with previous work.

Auxiliary loss. Since the rewards in Atari games are sparse, we divide the segments such that all the state-action pairs within the same segment have the same return. This corresponds to the setting of Z-learning with K → ∞, where the positive sample has exactly the same return as the anchor. The auxiliary loss for each update is then calculated as follows: first, we sample a batch of 64 anchor state-action pairs from the prioritized replay memory. Then, for each state-action pair, we sample the corresponding positive pair (i.e., a state-action pair within the same segment as the anchor) and the corresponding negative pair (randomly selected from the replay memory). The auxiliary loss is calculated on these samples, with effectively |D| = 128.
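The element-wise combination of state and action embeddings described above can be sketched as follows. This is a numpy stand-in for the actual network code; the dimensions (576-d embeddings, 18 actions) follow the text, while the function name and random inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, batch = 18, 576, 32

# One trainable embedding vector per discrete action (here random
# placeholders standing in for learned parameters).
action_emb = rng.normal(size=(n_actions, dim))

def state_action_embedding(state_emb, actions):
    # Element-wise product of the state embedding and the embedding of
    # the taken action; the result is shared by the auxiliary task and
    # the main RL task.
    return state_emb * action_emb[actions]

s = rng.normal(size=(batch, dim))                # state embeddings
a = rng.integers(0, n_actions, size=batch)       # sampled actions
x = state_action_embedding(s, a)
assert x.shape == (batch, dim)
```

The element-wise product keeps the embedding dimension unchanged, so the downstream value head needs no architectural change beyond consuming a single state-action embedding instead of a state embedding.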

C.2 IMPLEMENTATION FOR DMCONTROL SUITE

Codebase. We use SAC as the base RL algorithm and build our algorithm on top of the publicly released implementation of CURL (Srinivas et al., 2020).

Network architecture. Similarly, we modify the architecture of the critic network in SAC. In SAC, the state embedding has a dimension of 50. Since the actions in the continuous control tasks of the DMControl suite are continuous vectors of dimension d, we directly concatenate the action to the state embedding, resulting in a state-action embedding of size 50 + d. The critic network then receives the state-action embedding as input and outputs the Q value. The actor network receives the state embedding as input and outputs the action selection distribution for the corresponding state. Note that, although our auxiliary loss is based on the state-action embedding, the state embedding used by the actor network is also trained by the auxiliary loss through back-propagation of the gradients.

Hyperparameters. We set the threshold for dividing the segments to 1.0, i.e., when appending transitions to the replay buffer, we start a new segment once the cumulative reward within the last segment exceeds this threshold. The auxiliary loss and the hyperparameters balancing the auxiliary loss and the main RL loss are the same as those used for Atari games. All other hyperparameters are exactly the same as those in the CURL implementation, and we refer the readers to their paper for the details.
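The segment bookkeeping described in the hyperparameters paragraph can be sketched as a small helper. The function name and return format below are our own assumptions; only the rule (start a new segment once the cumulative reward exceeds the threshold) comes from the text:

```python
def split_segments(rewards, threshold=1.0):
    """Assign a segment id to each transition: accumulate rewards within
    the current segment and start a new segment once the cumulative
    reward exceeds the threshold (1.0 in the DMControl experiments)."""
    segment_ids, seg, cum = [], 0, 0.0
    for r in rewards:
        segment_ids.append(seg)
        cum += r
        if cum > threshold:
            seg, cum = seg + 1, 0.0
    return segment_ids

print(split_segments([0.4, 0.4, 0.4, 0.9, 0.2, 0.3]))  # [0, 0, 0, 1, 1, 2]
```

Transitions sharing a segment id serve as positive pairs for the contrastive auxiliary loss, mirroring the same-return segments used for Atari.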

D.1 MORE SEEDS RESULTS OF REPRESENTATION ANALYSIS

For clarity, we only show the result of the representation analysis with a single seed in the main text; we add the results for multiple seeds here. A detailed description of the analysis task can be found in the first paragraph of Section 5.2.

D.2 HIGH DATA REGIME RESULTS

To empirically study how applicable our method is to higher data regimes, we run experiments on the first five Atari games (of Table 1) for 1.5 million agent interactions. We show the evaluation results of both our algorithm and the Rainbow baseline in Table 2. We can see that RCRL outperforms the ERainbow-sa baseline on 4 out of 5 games, which may imply that our auxiliary task has the potential to improve performance in the high-data regime.

