EXTREME Q-LEARNING: MAXENT RL WITHOUT ENTROPY

Abstract

Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from economics. By doing so, we avoid computing Q-values using out-of-distribution actions, which is often a substantial source of error. Our key insight is to introduce an objective that directly estimates the optimal soft-value functions (LogSumExp) in the maximum entropy RL setting without needing to sample from a policy. Using EVT, we derive our Extreme Q-Learning framework and, consequently, online and, for the first time, offline MaxEnt Q-learning algorithms that do not explicitly require access to a policy or its entropy. Our method obtains consistently strong performance on the D4RL benchmark, outperforming prior works by 10+ points on the challenging Franka Kitchen tasks while offering moderate improvements over SAC and TD3 on online DM Control tasks. Visualizations and code can be found on our website.

1. INTRODUCTION

Modern Deep Reinforcement Learning (RL) algorithms have shown broad success in challenging control (Haarnoja et al., 2018; Schulman et al., 2015) and game-playing domains (Mnih et al., 2013). While tabular Q-iteration or value-iteration methods are well understood, state-of-the-art RL algorithms often make theoretical compromises in order to deal with deep networks, high-dimensional state spaces, and continuous action spaces. In particular, standard Q-learning algorithms require computing the max or soft-max over the Q-function in order to fit the Bellman equations. Yet, almost all current off-policy RL algorithms for continuous control only indirectly estimate the Q-value of the next state with separate policy networks. Consequently, these methods only estimate the Q-function of the current policy, instead of the optimal $Q^*$, and rely on policy improvement via an actor. Moreover, actor-critic approaches on their own have been shown to be catastrophic in offline settings, where actions sampled from a policy are consistently out-of-distribution (Kumar et al., 2020; Fujimoto et al., 2018). As such, computing $\max Q$ for Bellman targets remains a core issue in deep RL. One popular approach is to train Maximum Entropy (MaxEnt) policies, in hopes that they are more robust to modeling and estimation errors (Ziebart, 2010). However, the Bellman backup $\mathcal{B}^*$ used in MaxEnt RL algorithms still requires computing the log-partition function over Q-values, which is usually intractable in high-dimensional action spaces. Instead, current methods like SAC (Haarnoja et al., 2018) rely on auxiliary policy networks, and as a result do not estimate $\mathcal{B}^*$, the optimal Bellman backup. Our key insight is to apply the extreme value analysis used in branches of finance and economics to reinforcement learning. Ultimately, this allows us to directly model the LogSumExp over Q-functions in the MaxEnt framework.
Intuitively, reward- or utility-seeking agents will consider the maximum of the set of possible future returns. The Extreme Value Theorem tells us that maximal values drawn from any exponentially tailed distribution follow the Generalized Extreme Value (GEV) Type-1 distribution, also referred to as the Gumbel distribution $\mathcal{G}$. The Gumbel distribution is thus a prime candidate for modeling errors in Q-functions. In fact, McFadden's 2000 Nobel-prize-winning work in economics on discrete choice models (McFadden, 1972) showed that soft-optimal utility functions with logit (or softmax) choice probabilities naturally arise when utilities are assumed to have Gumbel-distributed errors. This was subsequently generalized to stochastic MDPs by Rust (1986). Nevertheless, these results have remained largely unknown in the RL community. By introducing a novel loss optimization framework, we bring them into the world of modern deep RL. Empirically, we find that even modern deep RL approaches, for which errors are typically assumed to be Gaussian, exhibit errors that better approximate the Gumbel distribution; see Figure 1. By assuming errors to be Gumbel distributed, we obtain Gumbel Regression, a consistent estimator of log-partition functions even in continuous spaces. Furthermore, making this assumption about Q-values lets us derive a new Bellman loss objective that directly solves for the optimal MaxEnt Bellman operator $\mathcal{B}^*$, instead of the operator under the current policy $\mathcal{B}^\pi$. As soft optimality emerges from our framework, we can run MaxEnt RL independently of the policy. In the online setting, we avoid using a policy network to explicitly compute entropies. In the offline setting, we completely avoid sampling from learned policy networks, minimizing the aforementioned extrapolation error. Our resulting algorithms surpass or consistently match state-of-the-art (SOTA) methods while being practically simpler.
In this paper we outline the theoretical motivation for using Gumbel distributions in reinforcement learning, and show how it can be used to derive practical online and offline MaxEnt RL algorithms. Concretely, our contributions are as follows:

• We motivate Gumbel Regression and show it allows calculation of the log-partition function (LogSumExp) in continuous spaces. We apply it to MDPs to present a novel loss objective for RL using maximum-likelihood estimation.

• Our formulation extends soft-Q learning to offline RL as well as continuous action spaces without the need for policy entropies. It allows us to compute optimal soft-values $V^*$ and soft-Bellman updates $\mathcal{B}^*$ using SGD, which are usually intractable in continuous settings.

• We provide the missing theoretical link between soft and conservative Q-learning, showing how these formulations can be made equivalent. We also show how MaxEnt RL emerges naturally from vanilla RL as a conservatism in our framework.

• Finally, we empirically demonstrate strong results in offline RL, improving over prior methods by a large margin on the D4RL Franka Kitchen tasks, and performing moderately better than SAC and TD3 in online RL, while theoretically avoiding actor-critic formulations.

2. PRELIMINARIES

In this section we introduce Maximum Entropy (MaxEnt) RL and Extreme Value Theory (EVT), which we use to motivate our framework for estimating extremal values in RL. We consider an infinite-horizon Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}, \mathcal{A}$ represent the state and action spaces, $P(s'|s,a)$ represents the environment dynamics, $r(s,a)$ represents the reward function, and $\gamma \in (0,1)$ represents the discount factor. In the offline RL setting, we are given a dataset $D = \{(s, a, r, s')\}$ of tuples sampled from trajectories under a behavior policy $\pi_D$, without any additional environment interactions. We use $\rho_\pi(s)$ to denote the distribution of states that a policy $\pi(a|s)$ generates. In the MaxEnt framework, an MDP with entropy regularization is referred to as a soft-MDP (Bloem & Bambos, 2014), and we use this terminology throughout.

2.1. MAXIMUM ENTROPY RL

Standard RL seeks to learn a policy that maximizes the expected sum of (discounted) rewards $\mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t r(s_t, a_t)]$, for $(s_t, a_t)$ drawn at timestep $t$ from the trajectory distribution that $\pi$ generates. We consider a generalized version of Maximum Entropy RL that augments the standard reward objective with the KL-divergence between the policy and a reference distribution $\mu$: $\mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t (r(s_t, a_t) - \beta \log \frac{\pi(a_t|s_t)}{\mu(a_t|s_t)})]$, where $\beta$ is the regularization strength. When $\mu$ is uniform $U$, this reduces to the standard MaxEnt objective used in online RL up to a constant. In the offline RL setting, we choose $\mu$ to be the behavior policy $\pi_D$ that generated the fixed dataset $D$. Consequently, this objective enforces a conservative KL-constraint on the learned policy, keeping it close to the behavior policy (Neu et al., 2017; Haarnoja et al., 2018). In MaxEnt RL, the soft-Bellman operator $\mathcal{B}^*: \mathbb{R}^{\mathcal{S}\times\mathcal{A}} \to \mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ is defined as $(\mathcal{B}^* Q)(s,a) = r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a)}[V^*(s')]$, where $Q$ is the soft-Q function and $V^*$ is the optimal soft-value satisfying $V^*(s) = \beta \log \sum_a \mu(a|s) \exp(Q(s,a)/\beta) := \mathcal{L}^\beta_{a \sim \mu(\cdot|s)}[Q(s,a)]$, where we denote the log-sum-exp (LSE) using the operator $\mathcal{L}^\beta$ for succinctness. The soft-Bellman operator has a unique contraction $Q^*$ (Haarnoja et al., 2018) given by the soft-Bellman equation $Q^* = \mathcal{B}^* Q^*$, and the optimal policy satisfies (Haarnoja et al., 2017): $\pi^*(a|s) = \mu(a|s) \exp((Q^*(s,a) - V^*(s))/\beta)$. Instead of estimating soft-values for a policy, $V^\pi(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}[Q(s,a) - \beta \log \frac{\pi(a|s)}{\mu(a|s)}]$, our approach will seek to directly fit the optimal soft-values $V^*$, i.e. the log-sum-exp (LSE) of Q-values.
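For a discrete action space, the operator $\mathcal{L}^\beta$ can be computed directly. The helper below is our own minimal illustration (not the paper's code); the toy Q-values and reference distribution are assumptions. It shows the numerically stabilized computation and how $\beta$ interpolates between the max and the expectation of the Q-values:

```python
import numpy as np

def soft_value(q, mu, beta):
    """Optimal soft-value V*(s) = beta * log sum_a mu(a|s) exp(Q(s,a)/beta)."""
    # Subtract the max before exponentiating for numerical stability.
    q_max = np.max(q)
    return q_max + beta * np.log(np.sum(mu * np.exp((q - q_max) / beta)))

q = np.array([1.0, 2.0, 3.0])   # toy Q-values over a discrete action set
mu = np.ones(3) / 3             # uniform reference distribution

v_small_beta = soft_value(q, mu, beta=0.01)   # approaches max_a Q = 3
v_large_beta = soft_value(q, mu, beta=100.0)  # approaches E_mu[Q] = 2
```

The small-$\beta$ and large-$\beta$ limits match the two extremes of the operator as described above.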

2.2. EXTREME VALUE THEOREM

The Fisher-Tippett or Extreme Value Theorem tells us that the maximum of i.i.d. samples from exponentially tailed distributions will asymptotically converge to the Gumbel distribution $\mathcal{G}(\mu, \beta)$, which has PDF $p(x) = \frac{1}{\beta}\exp(-(z + e^{-z}))$, where $z = (x - \mu)/\beta$ with location parameter $\mu$ and scale parameter $\beta$. Theorem 1 (Extreme Value Theorem (EVT) (Mood, 1950; Fisher & Tippett, 1928)). For i.i.d. random variables $X_1, \ldots, X_n \sim f_X$ with exponential tails, $\lim_{n\to\infty} \max_i(X_i)$ follows the Gumbel (GEV-1) distribution. Furthermore, $\mathcal{G}$ is max-stable, i.e. if $X_i \sim \mathcal{G}$, then $\max_i(X_i) \sim \mathcal{G}$ holds. This result is similar to the Central Limit Theorem (CLT), which states that means of i.i.d. errors approach the normal distribution. Thus, under a chain of max operations, any i.i.d. exponentially tailed errors will tend to become Gumbel distributed and stay as such. EVT will ultimately suggest characterizing nested errors in Q-learning as following a Gumbel distribution. In particular, the Gumbel distribution $\mathcal{G}$ exhibits unique properties we will exploit. One intriguing consequence of the Gumbel's max-stability is its ability to convert a maximum over a discrete set into a softmax. This is known as the Gumbel-Max Trick (Papandreou & Yuille, 2010; Hazan & Jaakkola, 2012). Concretely, for i.i.d. $\epsilon_i \sim \mathcal{G}(0, \beta)$ added to a set $\{x_1, \ldots, x_n\} \subset \mathbb{R}$, $\max_i(x_i + \epsilon_i) \sim \mathcal{G}(\beta \log \sum_i \exp(x_i/\beta),\ \beta)$, and $\arg\max_i(x_i + \epsilon_i) \sim \mathrm{softmax}(x_i/\beta)$. Furthermore, the Max-trick is unique to the Gumbel (Luce, 1977). These properties lead to the McFadden-Rust model (McFadden, 1972; Rust, 1986) of MDPs, as we state below. McFadden-Rust model: An MDP following the standard Bellman equations with stochasticity in the rewards due to unobserved state variables will satisfy the soft-Bellman equations over the observed state with actual rewards $r(s,a)$, given two conditions: 1. Additive separability (AS): observed rewards have additive i.i.d. Gumbel noise, i.e.
$\bar{r}(s, a) = r(s, a) + \epsilon(s, a)$, with actual rewards $r(s, a)$ and i.i.d. noise $\epsilon(s, a) \sim \mathcal{G}(0, \beta)$. 2. Conditional independence (CI): the noise $\epsilon(s, a)$ in a given state-action pair is conditionally independent of that in any other state-action pair. Moreover, the converse also holds: any MDP satisfying the Bellman equations and following a softmax policy necessarily has any i.i.d. reward noise satisfying the AS + CI conditions be Gumbel distributed. These results were first shown to hold in discrete choice theory by McFadden (1972), with the AS + CI conditions derived by Rust (1986) for discrete MDPs. We formalize these results in Appendix A and give succinct proofs using the developed properties of the Gumbel distribution. These results enable the view of a soft-MDP as an MDP with hidden i.i.d. Gumbel noise in the rewards. Notably, this gives an interpretation of a soft-MDP different from entropy regularization that still recovers the soft-Bellman equations.
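The Gumbel-Max trick above is easy to check numerically. The following is our own small simulation (not from the paper): we perturb a fixed set of utilities with Gumbel noise and compare the resulting argmax frequencies and max statistic against their closed forms.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
x = np.array([0.5, 1.0, 2.0])   # a fixed set of utilities (toy values)
n = 200_000

# Add i.i.d. Gumbel(0, beta) noise and take the argmax / max per row.
eps = rng.gumbel(loc=0.0, scale=beta, size=(n, 3))
perturbed = x + eps
argmax_freq = np.bincount(perturbed.argmax(axis=1), minlength=3) / n

# Gumbel-Max trick, part 1: argmax frequencies match softmax(x / beta).
softmax = np.exp(x / beta) / np.exp(x / beta).sum()

# Part 2: max_i(x_i + eps_i) ~ G(LSE(x), beta), whose mean is
# LSE + Euler-Mascheroni constant * beta.
lse = beta * np.log(np.exp(x / beta).sum())
emp_max_mean = perturbed.max(axis=1).mean()
gumbel_mean = lse + np.euler_gamma * beta
```

Both empirical quantities agree with the closed forms up to Monte Carlo error.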

3. EXTREME Q-LEARNING

In this section, we motivate our Extreme Q-learning framework, which directly models the soft-optimal values $V^*$, and show it naturally extends soft-Q learning. Notably, we use the Gumbel distribution to derive a new optimization framework for RL via maximum-likelihood estimation and apply it to both online and offline settings.

3.1. GUMBEL ERROR MODEL

Although assuming Gumbel errors in MDPs leads to intriguing properties, it is not obvious why the errors might be distributed as such. First, we empirically investigate the distribution of Bellman errors by computing them over the course of training. Specifically, we compute $r(s, a) + \gamma Q(s', \pi(s')) - Q(s, a)$ for samples $(s, a, s')$ from the replay buffer using a single Q-function from SAC (Haarnoja et al., 2018) (see Appendix D for more details). In Figure 1, we find the errors to be skewed and better fit by a Gumbel distribution. We explain this using EVT. Consider fitting Q-functions by learning an unbiased function approximator $\hat{Q}$ to solve the Bellman equation. We will assume access to $M$ such function approximators, each of which is assumed to be independent, e.g. parallel runs of a model over an experiment. We can see approximate Q-iteration as performing $\hat{Q}_t(s, a) = Q_t(s, a) + \epsilon_t(s, a)$, where $Q_t = \mathbb{E}[\hat{Q}_t]$ is the expected value of our prediction $\hat{Q}_t$ for an intended target over our estimators, and $\epsilon_t$ is the (zero-centered) error in our estimate. Here, we assume the error $\epsilon_t$ comes from the same underlying distribution for each of our estimators; the errors are thus i.i.d. random variables with zero mean. Now, consider the bootstrapped estimate using one of our $M$ estimators chosen randomly: $\mathcal{B}^*\hat{Q}_t(s, a) = r(s, a) + \gamma\max_{a'}\hat{Q}_t(s', a') = r(s, a) + \gamma\max_{a'}(Q_t(s', a') + \epsilon_t(s', a'))$. We now examine what happens after a subsequent update. At time $t+1$, suppose that we fit a fresh set of $M$ independent function approximators $\hat{Q}_{t+1}$ to the target $\mathcal{B}^*\hat{Q}_t$, introducing a new unbiased error $\epsilon_{t+1}$.
Then, for $Q_{t+1} = \mathbb{E}[\hat{Q}_{t+1}]$ it holds that $Q_{t+1}(s, a) = r(s, a) + \gamma\mathbb{E}_{s'|s,a}[\mathbb{E}_{\epsilon_t}[\max_{a'}(Q_t(s', a') + \epsilon_t(s', a'))]]$. As $Q_{t+1}$ is an expectation over both the dynamics and the functional errors, it accounts for all uncertainty (here $\mathbb{E}[\epsilon_{t+1}] = 0$). But the i.i.d. error $\epsilon_t$ remains and will be propagated through the Bellman equations and their chain of max operations. Due to Theorem 1, $\epsilon_t$ will become Gumbel distributed in the limit of $t$, and remain so due to the Gumbel distribution's max-stability. This highlights a fundamental issue with approximation-based RL algorithms that minimize the Mean-Squared Error (MSE) in the Bellman equation: they implicitly assume, via maximum likelihood estimation, that errors are Gaussian. In Appendix A, we further study the propagation of errors using the McFadden-Rust MDP model, and use it to develop a simplified Gumbel Error Model (GEM) for errors under function approximation. In practice, the Gumbel nature of the errors may be weakened, as estimators between timesteps share parameters and errors will be correlated across states and actions.
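The tendency of maxima of exponentially tailed errors toward a Gumbel shape can be illustrated with a quick simulation of our own (a sketch, not the paper's experiment): block maxima of i.i.d. exponential errors show the strong positive skew characteristic of a Gumbel rather than the symmetry of a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(6)

# Take the max over blocks of 50 i.i.d. exponentially tailed errors.
maxima = rng.exponential(scale=1.0, size=(200_000, 50)).max(axis=1)

# Standardized third moment (skewness) of the max distribution.
m, s = maxima.mean(), maxima.std()
skew = np.mean(((maxima - m) / s) ** 3)
# A Gaussian has skewness 0; the Gumbel has skewness
# 12 * sqrt(6) * zeta(3) / pi^3, roughly 1.14.
```

The measured skewness lands near the Gumbel value, far from the Gaussian's zero.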

3.2. GUMBEL REGRESSION

The goal of our work is to directly model the log-partition function (LogSumExp) over $Q(s, a)$ to avoid all of the aforementioned issues with taking a max in the function approximation domain. In this section we derive an objective function that models the LogSumExp by simply assuming errors follow a Gumbel distribution. Consider estimating a parameter $h$ for a random variable $X$ using samples $x_i$ from a dataset $D$, which have Gumbel distributed noise, i.e. $x_i = h + \epsilon_i$ where $\epsilon_i \sim -\mathcal{G}(0, \beta)$. Then the average log-likelihood of the dataset $D$ as a function of $h$ is given (up to a constant) as: $\mathbb{E}_{x_i \sim D}[\log p(x_i)] = \mathbb{E}_{x_i \sim D}[-e^{(x_i - h)/\beta} + (x_i - h)/\beta]$. Maximizing the log-likelihood yields the following convex minimization objective in $h$: $\mathcal{L}(h) = \mathbb{E}_{x_i \sim D}[e^{(x_i - h)/\beta} - (x_i - h)/\beta - 1]$, which forms our objective function $\mathcal{L}(\cdot)$ and resembles the Linex loss from econometrics (Parsian & Kirmani, 2002). $\beta$ is fixed as a hyper-parameter, and we show its effect on the loss in Figure 2. Critically, the minimum of this objective for a fixed $\beta$ is attained at $h = \beta \log \mathbb{E}_{x_i \sim D}[e^{x_i/\beta}]$, which resembles the LogSumExp with the summation replaced by an (empirical) expectation. In fact, this solution is the same as the operator $\mathcal{L}^\beta_\mu(X)$ defined for MaxEnt in Section 2.1 with $x_i$ sampled from $\mu$. In Figure 2, we show plots of Gumbel Regression on a simple dataset with different values of $\beta$. As this objective recovers $\mathcal{L}^\beta(X)$, we next use it to model soft-values in MaxEnt RL.
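Gumbel regression can be sketched in a few lines. This is our own illustration (a grid search stands in for SGD, and the toy dataset is an assumption): minimizing the Linex-style loss over $h$ recovers the LogSumExp statistic of the samples.

```python
import numpy as np

def gumbel_loss(h, x, beta):
    """Linex-style Gumbel regression loss L(h) = E[exp(z) - z - 1], z = (x - h)/beta."""
    z = (x - h) / beta
    return np.mean(np.exp(z) - z - 1.0)

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=20_000)   # toy dataset
beta = 0.5

# Minimize L(h) by scanning a grid of candidate h values (convex in h).
grid = np.linspace(-1.0, 4.0, 2001)
losses = [gumbel_loss(h, x, beta) for h in grid]
h_star = grid[int(np.argmin(losses))]

# The minimizer recovers the LogSumExp statistic beta * log E[exp(x / beta)].
lse = beta * np.log(np.mean(np.exp(x / beta)))
```

The agreement is exact up to the grid resolution, since the loss has the closed-form minimizer stated above.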

3.2.1. THEORY

Here we show that Gumbel regression is well behaved, considering the previously defined operator for random variables $\mathcal{L}^\beta(X) := \beta \log \mathbb{E}[e^{X/\beta}]$. First, we show it models the extremum. Lemma 3.1. For any $\beta_1 > \beta_2$, we have $\mathcal{L}^{\beta_1}(X) < \mathcal{L}^{\beta_2}(X)$, and $\mathcal{L}^\infty(X) = \mathbb{E}[X]$, $\mathcal{L}^0(X) = \sup(X)$. Thus, for any $\beta \in (0, \infty)$, the operator $\mathcal{L}^\beta(X)$ is a measure that interpolates between the expectation and the max of $X$. The operator $\mathcal{L}^\beta(X)$ is known as the cumulant-generating function or the log-Laplace transform, and is a measure of tail-risk closely linked to the entropic value at risk (EVaR) (Ahmadi-Javid, 2012). Lemma 3.2. The loss $\mathcal{L}(h)$ has a unique minimum at $h = \beta \log \mathbb{E}[e^{X/\beta}]$, and the empirical loss $\hat{\mathcal{L}}$ is an unbiased estimate of the true loss. Furthermore, for $\beta \gg 1$, $\mathcal{L}(\theta) \approx \frac{1}{2\beta^2}\mathbb{E}_{x_i \sim D}[(x_i - \theta)^2]$, thus behaving as the MSE loss for errors $\sim \mathcal{N}(0, \beta)$. In particular, the empirical loss $\hat{\mathcal{L}}$ over a dataset of $N$ samples can be minimized using stochastic gradient descent (SGD) methods to give an unbiased estimate of the LogSumExp over the $N$ samples. Lemma 3.3. $\hat{\mathcal{L}}^\beta(X)$ over a finite set of $N$ samples is a consistent estimator of the log-partition function $\mathcal{L}^\beta(X)$. Similarly, $\exp(\hat{\mathcal{L}}^\beta(X)/\beta)$ is an unbiased estimator of the partition function $Z = \mathbb{E}[e^{X/\beta}]$. We provide PAC learning bounds for Lemma 3.3, and further theoretical discussion of Gumbel Regression, in Appendix B.
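The MSE limit in Lemma 3.2 is easy to verify numerically. The sketch below is our own (toy data and parameter values are assumptions): for large $\beta$, the Gumbel loss is within a fraction of a percent of the scaled squared error.

```python
import numpy as np

# For beta >> 1, the Gumbel loss approaches a scaled MSE (Lemma 3.2):
# exp(z) - z - 1 ~ z^2 / 2 for small z = (x - theta) / beta.
rng = np.random.default_rng(2)
x = rng.normal(size=10_000)
theta, beta = 0.3, 50.0

z = (x - theta) / beta
gumbel = np.mean(np.exp(z) - z - 1.0)
mse_approx = np.mean((x - theta) ** 2) / (2 * beta ** 2)
rel_err = abs(gumbel - mse_approx) / mse_approx
```

The relative gap shrinks as $\beta$ grows, consistent with the second-order Taylor expansion of the exponential.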

3.3. MAXENT RL WITHOUT ENTROPY

Given that Gumbel Regression can be used to directly model the LogSumExp, we apply it to Q-learning. First, we connect our framework to conservative Q-learning (Kumar et al., 2020). Lemma 3.4. Consider the loss objective over Q-functions:

$\mathcal{L}(Q) = \mathbb{E}_{s \sim \rho_\mu, a \sim \mu(\cdot|s)}[e^{(\mathcal{T}^\pi Q_k(s,a) - Q(s,a))/\beta}] - \mathbb{E}_{s \sim \rho_\mu, a \sim \mu(\cdot|s)}[(\mathcal{T}^\pi Q_k(s,a) - Q(s,a))/\beta] - 1 \quad (8)$

where $\mathcal{T}^\pi Q(s,a) := r(s,a) + \gamma\mathbb{E}_{s'|s,a}\mathbb{E}_{a' \sim \pi}[Q(s',a')]$ is the vanilla Bellman operator under the policy $\pi(a|s)$. Then minimizing $\mathcal{L}$ gives the update rule: $\forall s, a, k:\ Q_{k+1}(s,a) = \mathcal{T}^\pi Q_k(s,a) - \beta \log \frac{\pi(a|s)}{\mu(a|s)} = \mathcal{B}^\pi Q_k(s,a)$. The above lemma transforms the regular Bellman backup into the soft-Bellman backup without the need for entropies, letting us convert standard RL into MaxEnt RL. Here, $\mathcal{L}(\cdot)$ performs a conservative Q-update similar to CQL (Kumar et al., 2020), with the nice property that the implied conservative term is just the KL-constraint between $\pi$ and $\mu$. This enforces an entropy regularization on our policy with respect to the behavior policy without the need for explicit entropy terms. Thus, soft-Q learning naturally emerges as a conservative update on regular Q-learning under our objective. Equation 8 is the dual of the KL-divergence between $\mu$ and $\pi$ (Garg et al., 2021), and we motivate this objective for RL and establish formal equivalence with conservative Q-learning in Appendix C. In our framework, we use the MaxEnt Bellman operator $\mathcal{B}^*$, which gives our ExtremeQ loss, the same as our Gumbel loss from the previous section:

$\mathcal{L}(Q) = \mathbb{E}_{s,a \sim \mu}[e^{(\hat{\mathcal{B}}^* Q_k(s,a) - Q(s,a))/\beta}] - \mathbb{E}_{s,a \sim \mu}[(\hat{\mathcal{B}}^* Q_k(s,a) - Q(s,a))/\beta] - 1 \quad (9)$

This gives the update rule $Q_{k+1}(s,a) = \mathcal{B}^* Q_k(s,a)$. Here, $\mathcal{L}(\cdot)$ requires estimation of $\hat{\mathcal{B}}^*$, which is very hard in continuous action spaces. Under deterministic dynamics, $\mathcal{L}$ can be obtained without $\hat{\mathcal{B}}^*$, as shown in Appendix C. However, in general we still need to estimate $\hat{\mathcal{B}}^*$. Next, we show how we can solve this issue.
Consider the soft-Bellman equation from Section 2.1 (Equation 1),

$\mathcal{B}^* Q = r(s,a) + \gamma\mathbb{E}_{s' \sim P(\cdot|s,a)}[V^*(s')], \quad (10)$

where $V^*(s) = \mathcal{L}^\beta_{a \sim \mu(\cdot|s)}[Q(s,a)]$. Then $V^*$ can be directly estimated using Gumbel regression by setting the temperature $\beta$ to the regularization strength in the MaxEnt framework. This gives us the following ExtremeV loss objective:

$\mathcal{J}(V) = \mathbb{E}_{s,a \sim \mu}[e^{(Q_k(s,a) - V(s))/\beta}] - \mathbb{E}_{s,a \sim \mu}[(Q_k(s,a) - V(s))/\beta] - 1. \quad (11)$

Lemma 3.5. Minimizing $\mathcal{J}$ over values gives the update rule: $V_k(s) = \mathcal{L}^\beta_{a \sim \mu(\cdot|s)}[Q_k(s,a)]$. We can thus obtain $V^*$ from $Q(s,a)$ using Gumbel regression and substitute it into Equation 10 to estimate the optimal Bellman backup $\mathcal{B}^* Q$. Thus, Lemmas 3.4 and 3.5 give us a scheme to solve the MaxEnt RL problem without the need for entropy.
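A minimal sketch of the ExtremeV update (Eq. 11) on a single-state problem with a discrete action space; the toy Q-values, the behavior policy, and the use of plain gradient descent are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0

# Toy single-state problem: Q-values over 4 actions and a behavior policy mu.
q = np.array([0.0, 0.5, 1.0, 2.0])
mu = np.array([0.4, 0.3, 0.2, 0.1])
actions = rng.choice(4, size=50_000, p=mu)   # dataset actions (s, a) ~ mu
q_samples = q[actions]

# ExtremeV loss J(V) = E[exp((Q - V)/beta) - (Q - V)/beta - 1];
# plain gradient descent on the scalar V.
v = 0.0
for _ in range(500):
    z = (q_samples - v) / beta
    grad = np.mean(1.0 - np.exp(z)) / beta   # dJ/dV
    v -= 0.5 * grad

# Fixed point (Lemma 3.5): V = beta * log sum_a mu(a) exp(Q(a)/beta).
target = beta * np.log(np.sum(mu * np.exp(q / beta)))
```

Gradient descent on the convex loss converges to the soft-value implied by the behavior distribution, up to Monte Carlo error from the finite dataset.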

3.4. LEARNING POLICIES

In the section above we derived a Q-learning strategy that does not require explicit use of a policy $\pi$. However, in continuous settings we still often want to recover a policy that can be run in the environment. Per Eq. 2 (Section 2.1), the optimal MaxEnt policy is $\pi^*(a|s) = \mu(a|s)e^{(Q(s,a) - V(s))/\beta}$. By minimizing the forward KL-divergence between $\pi$ and the optimal $\pi^*$ induced by $Q$ and $V$, we obtain the following training objective:

$\pi^* = \arg\max_\pi \mathbb{E}_{\rho_\mu(s,a)}[e^{(Q(s,a) - V(s))/\beta}\log\pi(a|s)]. \quad (12)$

If we take $\rho_\mu$ to be a dataset $D$ generated from a behavior policy $\pi_D$, we exactly recover the AWR objective used by prior works in offline RL (Peng et al., 2019; Nair et al., 2020), which can easily be computed using the offline dataset. This objective does not require sampling actions, which could potentially take $Q(s,a)$ out of distribution. Alternatively, if we want to sample from the policy instead of the reference distribution $\mu$, we can minimize the reverse KL-divergence, which gives us the SAC-like actor update:

$\pi^* = \arg\max_\pi \mathbb{E}_{\rho_\pi(s)\pi(a|s)}[Q(s,a) - \beta\log(\pi(a|s)/\mu(a|s))]. \quad (13)$

Interestingly, we note this does not depend on $V(s)$. If $\mu$ is chosen to be the last policy $\pi_k$, the second term becomes the KL-divergence between the current policy and $\pi_k$, performing a trust-region update on $\pi$ (Schulman et al., 2015; Vieillard et al., 2020). While estimating the log-ratio $\log(\pi(a|s)/\mu(a|s))$ can be difficult depending on the choice of $\mu$, our Gumbel loss $\mathcal{J}$ removes the need for $\mu$ during Q-learning by estimating soft-Q values of the form $Q(s,a) - \beta\log(\pi(a|s)/\mu(a|s))$.
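For a tabular policy class the AWR-style objective has a closed-form maximizer, which makes the forward-KL claim easy to check numerically. This is our own sketch; the toy Q-values and behavior policy are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
beta = 1.0
q = np.array([0.0, 0.5, 1.0, 2.0])
mu = np.array([0.4, 0.3, 0.2, 0.1])   # behavior policy that generated D

# Sample dataset actions from mu and compute AWR weights exp((Q - V)/beta).
a = rng.choice(4, size=200_000, p=mu)
v = beta * np.log(np.sum(mu * np.exp(q / beta)))   # optimal soft-value
w = np.exp((q[a] - v) / beta)

# For a tabular policy, maximizing E_D[w * log pi(a)] gives pi(a)
# proportional to the weighted empirical action counts.
counts = np.bincount(a, weights=w, minlength=4)
pi_awr = counts / counts.sum()

# This recovers the optimal MaxEnt policy pi*(a) = mu(a) exp((Q(a) - V)/beta).
pi_star = mu * np.exp((q - v) / beta)
```

The weighted maximum likelihood solution matches the optimal MaxEnt policy up to sampling noise, and the target policy is automatically normalized because $V$ is the soft-value.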

3.5. PRACTICAL ALGORITHMS

Algorithm 1 Extreme Q-learning (X-QL) (under stochastic dynamics)
1: Init $Q_\phi$, $V_\theta$, and $\pi_\psi$
2: for each training step do
3: Train $V_\theta$ using $\mathcal{J}(\theta)$ from Eq. 11 (with $a \sim D$ (offline) or $a \sim \pi_\psi$ (online))
4: Train $Q_\phi$ using $\mathcal{L}(\phi)$ from Eq. 14
5: Update $\pi_\psi$ via Eq. 12 (offline) or Eq. 13 (online)
6: end for

In this section we develop a practical approach to Extreme Q-learning (X-QL) for both online and offline RL. We consider parameterized functions $V_\theta(s)$, $Q_\phi(s,a)$, and $\pi_\psi(a|s)$ and let $D$ be the training data distribution. A core issue with directly optimizing Eq. 10 is over-optimism about the dynamics (Levine, 2018) when using single-sample estimates of the Bellman backup. To overcome this issue in stochastic settings, we separate the optimization of $V_\theta$ from that of $Q_\phi$, following Section 3.3. We learn $V_\theta$ using Eq. 11 to directly fit the optimal soft-values $V^*(s)$ based on Gumbel regression. Using $V_\theta(s')$ we can get single-sample estimates of $\mathcal{B}^*$ as $r(s,a) + \gamma V_\theta(s')$. We can then learn an unbiased expectation over the dynamics, $Q_\phi \approx \mathbb{E}_{s'|s,a}[r(s,a) + \gamma V_\theta(s')]$, by minimizing the mean-squared error (MSE) between the single-sample targets and $Q_\phi$:

$\mathcal{L}(\phi) = \mathbb{E}_{(s,a,s') \sim D}[(Q_\phi(s,a) - r(s,a) - \gamma V_\theta(s'))^2]. \quad (14)$

Under deterministic dynamics, our approach is largely simplified, and we directly learn a single $Q_\phi$ using Eq. 9 without needing to learn $\mathcal{B}^*$ or $V^*$. Similarly, we learn soft-optimal policies using Eq. 12 (offline) or Eq. 13 (online). Offline RL. In the offline setting, $D$ is a fixed dataset assumed to be collected with the behavior policy $\pi_D$. Here, learning values with Eq. 11 has a number of practical benefits. First, we are able to fit the optimal soft-values $V^*$ without sampling from a policy network, which has been shown to cause large out-of-distribution errors in the offline setting, where mistakes cannot be corrected by collecting additional data.
Second, we inherently enforce a KL-constraint between the optimal policy $\pi^*$ and the behavior policy $\pi_D$. This provides tunable conservatism via the temperature $\beta$. After offline training of $Q_\phi$ and $V_\theta$, we can recover the policy post-training using the AWR objective (Eq. 12). Our practical implementation follows the training style of Kostrikov et al. (2021), but we train the value network using our ExtremeQ loss. Online RL. In the online setting, $D$ is usually given as a replay buffer of previously sampled states and actions. In practice, however, obtaining a good estimate of $V^*(s')$ requires that we sample actions with high Q-values instead of sampling uniformly from $D$. As online learning allows agents to correct over-optimism by collecting additional data, we use a previous version of the policy network $\pi_{\psi_k}$ to sample actions for the Bellman backup, amounting to the trust-region policy updates detailed at the end of Section 3.4. In practice, we modify SAC and TD3 with our formulation. To imbue SAC (Haarnoja et al., 2018) with the benefits of Extreme Q-learning, we simply train $V_\theta$ using Eq. 11 with $s \sim D$, $a \sim \pi_{\psi_k}(a|s)$. This means that we do not use action probabilities when updating the value networks, unlike other MaxEnt RL approaches. The policy is learned via the objective $\max_\psi \mathbb{E}[Q_\phi(s, \pi_\psi(s))]$ with added entropy regularization, as SAC does not use a fixed noise schedule. TD3 by default does not use a value network, so we use our algorithm for deterministic dynamics by changing the loss used to train Q in TD3 to directly follow Eq. 9. The policy is learned as in SAC, except without entropy regularization, as TD3 uses a fixed noise schedule.
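The alternating value and Q updates of Algorithm 1 can be sketched in tabular form, where the ExtremeV minimizer is available in closed form. The tiny MDP below is our own toy construction (function approximation and sampling are stripped away), showing the iteration converging to a soft-Bellman fixed point:

```python
import numpy as np

# Tabular X-QL sketch on a tiny 2-state, 2-action MDP with known dynamics,
# alternating the ExtremeV step (closed form here) and the Q regression step.
r = np.array([[0.0, 1.0],
              [0.5, 0.0]])                # r[s, a]
P = np.array([[[1.0, 0.0], [0.0, 1.0]],  # P[s, a, s']
              [[0.5, 0.5], [1.0, 0.0]]])
gamma, beta = 0.9, 0.5
mu = np.full((2, 2), 0.5)                # uniform behavior policy mu(a|s)

Q = np.zeros((2, 2))
for _ in range(500):
    # ExtremeV step: V(s) = beta * log sum_a mu(a|s) exp(Q(s,a)/beta).
    V = beta * np.log(np.sum(mu * np.exp(Q / beta), axis=1))
    # Q step: regress Q towards r(s,a) + gamma * E_{s'}[V(s')] (exact here).
    Q = r + gamma * P @ V

# At convergence Q satisfies the soft-Bellman equation Q* = B* Q*.
V = beta * np.log(np.sum(mu * np.exp(Q / beta), axis=1))
residual = np.max(np.abs(Q - (r + gamma * P @ V)))
```

Because the soft-Bellman operator is a $\gamma$-contraction, the residual vanishes after a few hundred iterations.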

4. EXPERIMENTS

We compare our Extreme Q-learning (X-QL) approach to state-of-the-art algorithms across a wide set of continuous control tasks in both online and offline settings. In practice, the exponential nature of Gumbel regression poses difficult optimization challenges; we provide offline results on Adroit, details of the loss implementation, ablations, and hyperparameters in Appendix D. We find performance on the Gym locomotion tasks to be already largely saturated without introducing ensembles (An et al., 2021), but our method achieves consistently high performance across environments. While we attain good performance using fixed hyper-parameters per domain, X-QL achieves even higher absolute performance and faster convergence than IQL's reported results when hyper-parameters are tuned per environment. With additional tuning, we also see particularly large improvements on the AntMaze tasks, which require a significant amount of "stitching" between trajectories (Kostrikov et al., 2021). Full learning curves are in the Appendix. Like IQL, X-QL can be easily fine-tuned using online data to attain even higher performance, as shown in Table 2. A key component of TD3 (Fujimoto et al., 2018) is Double Q-learning, which takes the minimum of two Q-functions to remove overestimation bias in the Q-target. As we assume errors to be Gumbel distributed, we expect our X-variants to be more robust to such errors. In all environments except Cheetah Run, our X-TD3 without the Double-Q trick, denoted X-TD3 - DQ, performs better than standard TD3. While the gains from Extreme Q-learning are modest in online settings, none of our methods require access to the policy distribution to learn the Q-values.

5. RELATED WORK

Our approach builds on works in online and offline RL; here we review the most salient ones. Inspiration for our framework comes from econometrics (Rust, 1986; McFadden, 1972).

6. CONCLUSION

We propose Extreme Q-learning, a new framework for MaxEnt RL that directly estimates the optimal Bellman backup $\mathcal{B}^*$ without relying on explicit access to a policy. Theoretically, we bridge the gap between the regular, soft, and conservative Q-learning formulations. Empirically, we show that our framework can be used to develop simple, performant RL algorithms. A number of future directions remain, such as improving training stability with the exponential Gumbel loss function and integrating automatic tuning methods for the temperature $\beta$, as in SAC (Haarnoja et al., 2018). Finally, we hope that our framework can find general use in machine learning for estimating log-partition functions.

A THE GUMBEL ERROR MODEL FOR MDPS

In this section, we functionally analyze Q-learning using our framework and further develop the Gumbel Error Model (GEM) for MDPs.

A.1 RUST-MCFADDEN MODEL OF MDPS

For an MDP following the Bellman equations, we assume the observed rewards to be stochastic due to an unobserved component of the state. Let $s$ be the observed state, and $(s, z)$ be the actual state with hidden component $z$. Then,

$Q(s, z, a) = R(s, z, a) + \gamma\mathbb{E}_{s' \sim P(\cdot|s,a)}[\mathbb{E}_{z'|s'}[V(s', z')]], \quad V(s, z) = \max_a Q(s, z, a).$

Lemma A.1. Given 1) the conditional independence (CI) assumption that $z'$ depends only on $s'$, i.e. $p(s', z'|s, z, a) = p(z'|s')p(s'|s, a)$, and 2) the additive separability (AS) assumption on the hidden noise, $R(s, a, z) = r(s, a) + \epsilon(z, a)$: then for i.i.d. $\epsilon(z, a) \sim \mathcal{G}(0, \beta)$, we recover the soft-Bellman equations for $q(s, a)$, where $Q(s, z, a) = q(s, a) + \epsilon(z, a)$ and $v(s) = \mathbb{E}_z[V(s, z)]$, with rewards $r(s, a)$ and entropy regularization $\beta$. Hence, a soft-MDP in MaxEnt RL is equivalent to an MDP with an extra hidden variable in the state that introduces i.i.d. Gumbel noise in the rewards and follows the AS + CI conditions. Proof. We have

$q(s, a) = r(s, a) + \gamma\mathbb{E}_{s' \sim P(\cdot|s,a)}[\mathbb{E}_{z'|s'}[V(s', z')]] \quad (17)$

$v(s) = \mathbb{E}_z[V(s, z)] = \mathbb{E}_z[\max_a(q(s, a) + \epsilon(z, a))]. \quad (18)$

From this, we can get fixed-point equations for $q$ and $\pi$:

$q(s, a) = r(s, a) + \gamma\mathbb{E}_{s' \sim P(\cdot|s,a)}[\mathbb{E}_{z'|s'}[\max_{a'}(q(s', a') + \epsilon(z', a'))]] \quad (19)$

$\pi(\cdot|s) = \mathbb{E}_z[\arg\max_a(q(s, a) + \epsilon(z, a))] \in \Delta^A, \quad (20)$

where $\Delta^A$ is the set of all policies. Now, let $\epsilon(z, a) \sim \mathcal{G}(0, \beta)$, assumed independent for each $(z, a)$ (or equivalently $(s, a)$, due to the CI condition). Then we can use the Gumbel-Max trick to recover the soft-Bellman equations for $q(s, a)$ and $v(s)$ with rewards $r(s, a)$:

$q(s, a) = r(s, a) + \gamma\mathbb{E}_{s' \sim P(\cdot|s,a)}[\mathcal{L}^\beta_{a'}[q(s', a')]], \quad \pi(\cdot|s) = \mathrm{softmax}_a(q(s, a)/\beta). \quad (21)$

Thus, the soft-Bellman optimality equation and the related optimal policy can arise either from the entropic-regularization viewpoint or from the Gumbel-error viewpoint on an MDP. Corollary A.1.1.
Converse: An MDP following the Bellman optimality equation and having a policy that is softmax distributed necessarily has any i.i.d. noise in the rewards due to hidden state variables be Gumbel distributed, given that the AS + CI conditions hold. Proof. McFadden (1972) proved this converse in his seminal work on discrete choice theory: for i.i.d. $\epsilon$ satisfying Equation 19, a choice policy $\pi \sim \mathrm{softmax}$ implies that $\epsilon$ is Gumbel distributed. We show a proof here similar to the original, adapted to MDPs. Considering Equation 20, we want $\pi(a|s)$ to be softmax distributed. Let $\epsilon$ have an unknown CDF $F$, and consider there to be $N$ possible actions. Then,

$P(\arg\max_a(q(s, a) + \epsilon(z, a)) = a_i \mid s, z) = P(q(s, a_i) + \epsilon(z, a_i) \ge q(s, a_j) + \epsilon(z, a_j)\ \forall j \neq i \mid s, z) = P(\epsilon(z, a_j) - \epsilon(z, a_i) \le q(s, a_i) - q(s, a_j)\ \forall j \neq i \mid s, z).$

Simplifying the notation, we write $\epsilon(z, a_i) = \epsilon_i$ and $q(s, a_i) = q_i$. Conditioning on $\epsilon_i = \varepsilon$, the independence of the $\epsilon_j$ gives

$P(\epsilon_j \le \varepsilon + q_i - q_j\ \forall j \neq i) = \prod_{j=1, j\neq i}^N F(\varepsilon + q_i - q_j),$

and we can get the required probability $\pi(i)$ as:

$\pi(i) = \int_{-\infty}^{+\infty} \prod_{j=1, j\neq i}^N F(\varepsilon + q_i - q_j)\, dF(\varepsilon).$

For $\pi = \mathrm{softmax}(q)$, McFadden (1972) proved the uniqueness of $F$ as the Gumbel CDF, assuming the translation-completeness property to hold for $F$. Later, this uniqueness was shown to hold in general for any $N \ge 3$ (Luce, 1977).

A.2 GUMBEL ERROR MODEL (GEM) FOR MDPS

To develop our Gumbel Error Model (GEM) for MDPs under function approximation, as in Section 3.1, we follow our simplified scheme of $M$ independent estimators $\hat{Q}$, which results in the following equation over $Q = \mathbb{E}[\hat{Q}]$:

$Q_{t+1}(s, a) = r(s, a) + \gamma\mathbb{E}_{s'|s,a}[\mathbb{E}_{\epsilon_t}[\max_{a'}(Q_t(s', a') + \epsilon_t(s', a'))]]. \quad (24)$

Here, the maximum over random variables will generally be greater than the true max, i.e. $\mathbb{E}_\epsilon[\max_{a'}(Q(s', a') + \epsilon(s', a'))] \ge \max_{a'} Q(s', a')$ (Thrun & Schwartz, 1999). As a result, even initially zero-mean error can cause Q-updates to propagate consistent overestimation bias through the Bellman equation. This is a known issue with function approximation in RL (Fujimoto et al., 2018). Now, we can use the Rust-McFadden model from before. To account for the stochasticity, we consider the extra unobserved state variables $z$ in the MDP to be the model parameters $\theta$ used in the function approximation. The errors $\epsilon_t$ from function approximation can thus be considered as noise added to the reward. Here, the CI condition holds, as $\epsilon$ is separate from the dynamics and becomes conditionally independent for each state-action pair, and the AS condition is implied. Then, for $Q$ satisfying Equation 24, we can apply the McFadden-Rust model, which implies that for the policy to be soft-optimal, i.e. a softmax over $Q$, $\epsilon$ will be Gumbel distributed. Conversely, for i.i.d. $\epsilon \sim \mathcal{G}$, $Q(s, a)$ follows the soft-Bellman equations and $\pi(a|s) = \mathrm{softmax}_a(Q(s, a)/\beta)$. This indicates an optimality condition on the MDP: for us to eventually attain the optimal softmax policy in the presence of functional bootstrapping (Equation 24), the errors should follow the Gumbel distribution.

A.2.1 TIME EVOLUTION OF ERRORS IN MDPS UNDER DETERMINISTIC DYNAMICS

In this section, we characterize the time evolution of errors in an MDP using GEM. We assume deterministic dynamics to simplify our analysis. We suppose that we know the distribution of Q-values at time t and model the evolution of this distribution through the Bellman equations. Let $Z_t(s,a)$ be a random variable sampled from the distribution of Q-values at time t; then the following Bellman equation holds:
$$Z_{t+1}(s,a) = r(s,a) + \gamma \max_{a'} Z_t(s',a'). \tag{25}$$
Here, $Z_{t+1}(s,a) = \max_{a'}[r(s,a) + \gamma Z_t(s',a')]$ is a maximal distribution, and based on EVT it should eventually converge to an extreme value distribution, which we can model as a Gumbel. Concretely, let us assume that $Z_t(s,a) \sim \mathcal{G}(Q_t(s,a), \beta)$ for some $Q_t(s,a) \in \mathbb{R}$ and β > 0. Furthermore, we assume that the Q-value distribution is jointly independent over different state-action pairs, i.e. Z(s, a) is independent of Z(s′, a′) for all (s, a) ≠ (s′, a′). Then $\max_{a'} Z_t(s',a') \sim \mathcal{G}(V_t(s'), \beta)$ with $V_t(s') = L^\beta_{a'}[Q_t(s',a')]$, using the Gumbel-max trick. Substituting into Equation 25 and rescaling $Z_t$ by γ, we get:
$$Z_{t+1}(s,a) \sim \mathcal{G}\big(r(s,a) + \gamma L^\beta_{a'}[Q_t(s',a')],\ \gamma\beta\big). \tag{26}$$
Very interestingly, the Q-distribution thus becomes a Gumbel process, where the location parameter Q(s, a) follows the optimal soft-Bellman equation. Similarly, the temperature scales as γβ, so the distribution becomes sharper after every timestep. After a number of timesteps, we see that Z(s, a) eventually collapses to the Delta distribution over the unique contraction $Q^*(s,a)$. Here, γ controls the rate of decay of the Gumbel distribution into the collapsed Delta distribution. Thus we recover the expected result that under deterministic dynamics the optimal Q-function is deterministic and its distribution is peaked.
So if a Gumbel error enters the MDP through a functional error or some other source at a timestep t in some state s, it sets off a wave that propagates the Gumbel error into its child states following Equation 26. This Gumbel error process thus decays at a rate of γ every timestep and eventually settles down, with the Q-values reaching the steady solution $Q^*$. The variance of this Gumbel process, given as $\frac{\pi^2}{6}\beta^2$, decays as $\gamma^2$; similarly, the bias decays as a γ-contraction in the $L_\infty$ norm. Hence, GEM gives us an analytic characterization of error propagation in MDPs under deterministic dynamics. Under stochastic dynamics, however, characterizing the errors using GEM becomes non-trivial, as the Gumbel distribution is not mean-stable, unlike the Gaussian. We hypothesise that the errors will follow some mix of Gumbel-Gaussian distributions, and leave this characterization as an open direction for future work.
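The Gumbel process step in Equation 26 rests on max-stability: the max of independent Gumbels with locations $q_i$ and common scale β is again Gumbel, with location $L^\beta(q)$ and the same scale. A quick Monte Carlo check (the locations and β are arbitrary; a Gumbel's mean is its location plus $\gamma_E \beta$, with $\gamma_E$ Euler's constant, and its standard deviation is $\pi\beta/\sqrt{6}$):

```python
import numpy as np

rng = np.random.default_rng(2)
q = np.array([0.0, 1.0, 2.0])    # arbitrary locations Q(s', a')
beta = 1.0
n = 200_000

# Z(s', a') ~ Gumbel(Q(s', a'), beta), jointly independent across actions.
z = rng.gumbel(loc=q, scale=beta, size=(n, q.size))
m = z.max(axis=1)                # samples of max_a' Z(s', a')

# Predicted location of the max: the LogSumExp L^beta[q].
lse = beta * np.log(np.exp(q / beta).sum())
euler_gamma = 0.57721566
```

The empirical mean of `m` matches `lse + euler_gamma * beta`, and the empirical standard deviation matches $\pi\beta/\sqrt{6}$, confirming that the max is Gumbel with location $L^\beta[q]$ and unchanged scale.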

B GUMBEL REGRESSION

We characterize the concentration bounds for Gumbel Regression in this section. First, we bound the bias on applying L β to inputs containing errors. Second, we bound the PAC learning error due to an empirical Lβ over finite N samples.
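Before the bounds, a sanity check on the estimator itself: the minimizer of the empirical Gumbel regression loss $e^z - z - 1$ with $z = (x - v)/\beta$ is exactly the empirical LogSumExp $\hat{L}_\beta(x)$. A minimal sketch (the data and β are arbitrary, and the minimization is a brute-force grid search rather than gradient descent):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=1000)   # arbitrary samples
beta = 0.5

def gumbel_loss(v):
    # Empirical Gumbel regression loss for a candidate soft-value v.
    z = (x - v) / beta
    return np.mean(np.exp(z) - z - 1.0)

# Brute-force minimization over a fine grid of candidate values v.
grid = np.linspace(x.min(), x.max() + 1.0, 5001)
v_hat = grid[np.argmin([gumbel_loss(v) for v in grid])]

# Closed-form minimizer: the empirical soft-maximum L_beta(x).
v_star = beta * np.log(np.mean(np.exp(x / beta)))
```

Setting the loss derivative to zero gives $\frac{1}{N}\sum_i e^{(x_i - v)/\beta} = 1$, whose unique solution is $v = \beta \log \frac{1}{N}\sum_i e^{x_i/\beta}$, so the grid minimizer agrees with `v_star` up to the grid spacing.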

B.1 OVERESTIMATION BIAS

Let $\hat{Q}(s,a)$ be a random variable representing a Q-value estimate for a state-action pair (s, a). We assume that it is an unbiased estimate of the true Q-value Q(s, a), with $\mathbb{E}[\hat{Q}(s,a)] = Q(s,a)$ and $\hat{Q}(s,a) \in [-Q_{max}, Q_{max}]$. Then $V(s) = L^\beta_{a\sim\mu}[Q(s,a)]$ is the true value function, and $\hat{V}(s) = L^\beta_{a\sim\mu}[\hat{Q}(s,a)]$ is its estimate.

Lemma B.1. We have $V(s) \le \mathbb{E}[\hat{V}(s)] \le \mathbb{E}_{a\sim\mu}[Q(s,a)] + \beta\log\cosh(Q_{max}/\beta)$.

Proof. The lower bound $V(s) \le \mathbb{E}[\hat{V}(s)]$ is easy to show using Jensen's inequality, as LogSumExp is a convex function. For the upper bound, we can use a reverse Jensen's inequality (Simić, 2009): for any convex mapping f on the interval [a, b] it holds that
$$\sum_i p_i f(x_i) \le f\Big(\sum_i p_i x_i\Big) + f(a) + f(b) - f\Big(\frac{a+b}{2}\Big).$$
Setting $f = -\log(\cdot)$ and $x_i = e^{\hat{Q}(s,a)/\beta}$ with $[a, b] = [e^{-Q_{max}/\beta}, e^{Q_{max}/\beta}]$, we get
$$\mathbb{E}_{a\sim\mu}\big[-\log e^{\hat{Q}(s,a)/\beta}\big] \le -\log\big(\mathbb{E}_{a\sim\mu}[e^{\hat{Q}(s,a)/\beta}]\big) - \log\big(e^{Q_{max}/\beta}\big) - \log\big(e^{-Q_{max}/\beta}\big) + \log\Big(\frac{e^{Q_{max}/\beta} + e^{-Q_{max}/\beta}}{2}\Big).$$
On simplifying, $\hat{V}(s) = \beta\log\big(\mathbb{E}_{a\sim\mu}[e^{\hat{Q}(s,a)/\beta}]\big) \le \mathbb{E}_{a\sim\mu}[\hat{Q}(s,a)] + \beta\log\cosh(Q_{max}/\beta)$. Taking expectations on both sides, $\mathbb{E}[\hat{V}(s)] \le \mathbb{E}_{a\sim\mu}[Q(s,a)] + \beta\log\cosh(Q_{max}/\beta)$.

This gives an estimate of how much the LogSumExp overestimates the expectation over actions for the random variables $\hat{Q}$. This bias monotonically decreases with β: at β = 0 the maximal bias is $Q_{max}$, and for large β it decays as $\frac{Q_{max}^2}{2\beta}$.

B.2 PAC LEARNING BOUNDS FOR GUMBEL REGRESSION

Lemma B.2. $\exp(\hat{L}_\beta(X)/\beta)$ over finite N samples is an unbiased estimator of the partition function $Z_\beta = \mathbb{E}[e^{X/\beta}]$, and with probability at least 1 - δ it holds that
$$\exp(\hat{L}_\beta(X)/\beta) \le Z_\beta + \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Similarly, $\hat{L}_\beta(X)$ over finite N samples is a consistent estimator of $L_\beta(X)$, and with probability at least 1 - δ it holds that
$$\hat{L}_\beta(X) \le L_\beta(X) + \beta\,\frac{\sinh(X_{max}/\beta)}{Z_\beta}\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Proof. To prove these concentration bounds, we consider random variables $e^{X_1/\beta}, \dots, e^{X_N/\beta}$ with β > 0, such that $a_i \le X_i \le b_i$ almost surely, i.e. $e^{a_i/\beta} \le e^{X_i/\beta} \le e^{b_i/\beta}$.
We consider the sum $S_N = \sum_{i=1}^{N} e^{X_i/\beta}$ and use Hoeffding's inequality, so that for all t > 0:
$$P(S_N - \mathbb{E}S_N \ge t) \le \exp\left(\frac{-2t^2}{\sum_{i=1}^{N}\big(e^{b_i/\beta} - e^{a_i/\beta}\big)^2}\right).$$
To simplify, we let $a_i = -X_{max}$ and $b_i = X_{max}$ for all i, and rescale t as t = Ns for s > 0. Then
$$P(S_N - \mathbb{E}S_N \ge Ns) \le \exp\left(\frac{-Ns^2}{2\sinh^2(X_{max}/\beta)}\right).$$
We can notice that the left-hand side is the same as $P\big(\exp(\hat{L}_\beta(X)/\beta) - \exp(L_\beta(X)/\beta) \ge s\big)$, which is the required probability. Letting the right-hand side equal δ, we get $s = \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}$. Thus, with probability 1 - δ, it holds that
$$\exp(\hat{L}_\beta(X)/\beta) \le \exp(L_\beta(X)/\beta) + \sinh(X_{max}/\beta)\sqrt{\frac{2\log(1/\delta)}{N}}.$$
Thus, we get a concentration bound on $\exp(\hat{L}_\beta(X)/\beta)$, which is an unbiased estimator of the partition function $Z_\beta = \exp(L_\beta(X)/\beta)$. This bound becomes tighter with increasing β, and asymptotically behaves as $\frac{X_{max}}{\beta}\sqrt{\frac{2\log(1/\delta)}{N}}$. Similarly, to prove the bound on the log-partition function $\hat{L}_\beta(X)$, we can further take log(·) on both sides and use the inequality log(1 + x) ≤ x to get a direct concentration bound on $\hat{L}_\beta(X)$:
$$\hat{L}_\beta(X) \le L_\beta(X) + \beta\log\left(1 + \sinh(X_{max}/\beta)\,e^{-L_\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}}\right) \tag{30}$$
$$\le L_\beta(X) + \beta\sinh(X_{max}/\beta)\,e^{-L_\beta(X)/\beta}\sqrt{\frac{2\log(1/\delta)}{N}} \tag{31}$$
$$= L_\beta(X) + \beta\,\frac{\sinh(X_{max}/\beta)}{Z_\beta}\sqrt{\frac{2\log(1/\delta)}{N}}. \tag{32}$$

Lemma (deterministic dynamics; cf. Appendix C.1). Let $\mathcal{T}^\mu$ be the linear operator that maps Q from the current (s, a) to the next (s′, a′): $\mathcal{T}^\mu Q(s,a) := r(s,a) + \gamma Q(s',a')$. Then we have $\mathcal{B}^*Q_t = \arg\min_{Q' \in \Omega} \mathcal{J}(\mathcal{T}^\mu Q_t - Q')$, where Ω is the space of Q-functions. Proof. We use that under deterministic dynamics, $L^{\gamma\beta}_{a'\sim\mu}[\mathcal{T}^\mu Q(s,a)] = r(s,a) + \gamma L^{\beta}_{a'\sim\mu}[Q(s',a')] = \mathcal{B}^*Q(s,a)$. Then solving for the unique minimum of $\mathcal{J}$ establishes the above result. Thus, optimizing $\mathcal{J}$ to a fixed point is equivalent to Q-iteration with the Bellman operator.

C.2 BRIDGING SOFT AND CONSERVATIVE Q-LEARNING

Inherent Conservatism in X-QL. Our method is inherently conservative, similar to CQL (Kumar et al., 2020), in that it underestimates the value function (in vanilla Q-learning) $V^\pi(s)$ by $-\beta\,\mathbb{E}_{a\sim\pi(a|s)}\big[\log\frac{\pi(a|s)}{\pi_D(a|s)}\big]$, whereas CQL underestimates values by a factor $-\beta\,\mathbb{E}_{a\sim\pi(a|s)}\big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\big]$, where $\pi_D$ is the behavior policy. Notice that the underestimation factor transforms the $V^\pi$ of vanilla Q-learning into the (soft) $V^\pi$ used in the soft-Q-learning formulation. Thus, we observe that KL-regularized Q-learning is inherently conservative, and this conservatism is built into our method. Furthermore, it can be noted that CQL's conservatism can be derived as adding a $\chi^2$ regularization to an MDP: although not shown by the original work (Kumar et al., 2020) or, to our knowledge, any follow-ups, the last term of Eq. 14 in CQL's Appendix B (Kumar et al., 2020) is simply $\chi^2(\pi\,\|\,\pi_D)$, and what the original work refers to as $D_{CQL}$ is actually the $\chi^2$ divergence. Thus, it is possible to show that all the results for CQL hold for our method by simply replacing $D_{CQL}$ with $D_{KL}$, i.e. the $\chi^2$ divergence with the KL divergence everywhere. We show a simple proof below that $D_{CQL}$ is the $\chi^2$ divergence:
$$D_{CQL}(\pi, \pi_D)(s) := \sum_a \pi(a|s)\Big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\Big]$$
$$= \sum_a \big(\pi(a|s) - \pi_D(a|s) + \pi_D(a|s)\big)\Big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\Big]$$
$$= \sum_a \big(\pi(a|s) - \pi_D(a|s)\big)\,\frac{\pi(a|s) - \pi_D(a|s)}{\pi_D(a|s)} + \sum_a \pi_D(a|s)\Big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\Big]$$
$$= \sum_a \pi_D(a|s)\Big[\frac{\pi(a|s)}{\pi_D(a|s)} - 1\Big]^2 + 0 \qquad \text{since } \textstyle\sum_a \pi(a|s) = \sum_a \pi_D(a|s) = 1$$
$$= \chi^2\big(\pi(\cdot|s)\,\|\,\pi_D(\cdot|s)\big),$$
using the definition of the chi-square divergence.

Why X-QL is better than CQL for offline RL. In light of the above results, we know that CQL adds a $\chi^2$ regularization on the policy π with respect to the behavior policy $\pi_D$, whereas our method does the same using the reverse-KL divergence.
Now, the reverse-KL divergence has a mode-seeking behavior, and thus our method will find a policy that better fits the mode of the behavior policy and is more robust to random actions in the offline dataset. CQL does not have such a property and can be easily affected by noisy actions in the dataset.

Connection to the dual KL representation. For given distributions µ and π, we can write their KL divergence using the dual representation proposed by IQ-Learn (Garg et al., 2021):
$$D_{KL}(\pi\,\|\,\mu) = \max_{x}\ \mathbb{E}_{\mu}[-e^{-x}] - \mathbb{E}_{\pi}[x] + 1,$$
which is maximized for $x = -\log(\pi/\mu)$. We can make a clever substitution to exploit the above relationship. Let $x = (Q - \mathcal{T}^\pi Q_k)/\beta$ for a variable $Q \in \mathbb{R}$ and a fixed constant $\mathcal{T}^\pi Q_k$; then on variable substitution we get the equation
$$\mathbb{E}_{s\sim\rho_\mu}\big[D_{KL}\big(\pi(\cdot|s)\,\|\,\mu(\cdot|s)\big)\big] = -\min_Q \mathcal{L}(Q), \quad \text{with}$$
$$\mathcal{L}(Q) = \mathbb{E}_{s\sim\rho_\mu,\,a\sim\mu(\cdot|s)}\big[e^{(\mathcal{T}^\pi Q_k(s,a) - Q(s,a))/\beta}\big] - \mathbb{E}_{s\sim\rho_\mu,\,a\sim\pi(\cdot|s)}\big[(\mathcal{T}^\pi Q_k(s,a) - Q(s,a))/\beta\big] - 1.$$
This gives us Equation 8 in Section 3.3 of the main paper, and is minimized for $Q = \mathcal{T}^\pi Q_k - \beta\log(\pi/\mu)$, as we desire. Thus, this lets us transform the regular Bellman update into the soft-Bellman update.
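Both identities in this subsection can be verified numerically on random categorical distributions: first, that $D_{CQL}$ coincides with the $\chi^2$ divergence; second, that $x = -\log(\pi/\mu)$ maximizes the dual objective $\mathbb{E}_\mu[-e^{-x}] - \mathbb{E}_\pi[x]$, attaining $D_{KL}(\pi\|\mu) - 1$. The Dirichlet draws below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
pi = rng.dirichlet(np.ones(5))     # arbitrary policy pi(.|s)
pi_d = rng.dirichlet(np.ones(5))   # arbitrary behavior policy pi_D(.|s)

# D_CQL as defined by CQL, and the chi-square divergence: they coincide.
d_cql = np.sum(pi * (pi / pi_d - 1.0))
chi2 = np.sum(pi_d * (pi / pi_d - 1.0) ** 2)

# Dual objective E_mu[-e^{-x}] - E_pi[x]; it is concave in x, so the
# stationary point x* = -log(pi/mu) is the global maximizer.
mu = pi_d
def dual_obj(x):
    return np.sum(mu * (-np.exp(-x))) - np.sum(pi * x)

x_star = -np.log(pi / mu)
kl = np.sum(pi * np.log(pi / mu))

# Random perturbations of x* never improve the objective.
perturbed = [dual_obj(x_star + 0.2 * rng.normal(size=5)) for _ in range(100)]
```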

D EXPERIMENTS

In this section we provide additional results and more details on all experimental procedures.

D.1 A TOY EXAMPLE

[Figure panels: Max; XQL Loss, Beta = 0.1; XQL Loss, Beta = 0.5; MSE Loss]
Figure 4: Here we show the effect of using different ways of fitting the value function on a toy grid world, where the agent's goal is to navigate from the beginning of the maze on the bottom left to the end of the maze on the top left. The color of each square shows the learned value. As the environment is discrete, we can investigate how well Gumbel regression fits the maximum of the Q-values. As seen, when MSE loss is used instead of Gumbel regression, the resulting policy is poor at the beginning and the learned values fail to propagate. As we increase the value of beta, the learned values begin to better approximate the optimal max-Q policy shown on the very right.

We use the same data preprocessing, which is described in the IQL appendix. We additionally take their baseline results and use them in Table 1, Table 2, and Table 3 for accurate comparison.
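The behavior in Figure 4 can be reproduced in miniature on a deterministic chain: tabular value iteration that evaluates $L^\beta$ over a uniform behavior distribution approaches hard-max value iteration as β shrinks. The chain size, γ, and β below are arbitrary choices, and the soft value is computed in closed form rather than fit by gradient descent:

```python
import numpy as np

gamma, beta = 0.9, 0.01
n_states = 5                     # chain states 0..4; state 4 is the goal
goal = n_states - 1

def step(s, a):
    # a = 0 moves left, a = 1 moves right; reward 1 for reaching the goal,
    # and the goal state is absorbing with zero reward afterwards.
    if s == goal:
        return s, 0.0
    s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
    return s2, 1.0 if s2 == goal else 0.0

def soft_value_iteration(beta, iters=300):
    # V(s) <- L^beta_{a ~ Uniform}[ r(s, a) + gamma * V(s') ], computed with
    # the usual max-subtraction for numeric stability.
    v = np.zeros(n_states)
    for _ in range(iters):
        new_v = np.empty(n_states)
        for s in range(n_states):
            x = np.array([r + gamma * v[s2] for s2, r in (step(s, 0), step(s, 1))])
            m = x.max()
            new_v[s] = m + beta * np.log(np.mean(np.exp((x - m) / beta)))
        v = new_v
    return v

v_soft = soft_value_iteration(beta)
# Hard-max optimal values: a discounted unit reward for reaching the goal.
v_hard = np.array([gamma ** (goal - s - 1) for s in range(goal)] + [0.0])
```

With β = 0.01 the per-update downward bias of the uniform LogSumExp is about β log 2, so the converged soft values track the hard-max values closely.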

D.2 BELLMAN ERROR PLOTS

We keep our general algorithm hyper-parameters and evaluation procedure the same, but tune β and the gradient clipping value for each environment. Tuning β was done via hyper-parameter sweeps over a fixed set of values [0.6, 0.8, 1, 2, 5] for offline experiments, save for a few environments where larger values were clearly better. Increasing the batch size also tended to help with stability, since our rescaled loss does a per-batch normalization. AWAC parameters were left identical to those in IQL. For MuJoCo locomotion tasks we average mean returns over 10 evaluation trajectories and 6 random seeds. For the AntMaze tasks, we average over 1000 evaluation trajectories. We did not see stability issues in the MuJoCo locomotion environments, but found that offline runs for the AntMaze environments could occasionally diverge in training for a small β < 1. To help mitigate this, we found that adding Layer Normalization (Ba et al., 2016) to the value networks worked well. The full hyper-parameters we used for experiments are given in Table 4.

D.5 OFFLINE ABLATIONS

In this section we show hyper-parameter ablations for the offline experiments. In particular, we ablate the temperature parameter β and the batch size. The temperature β controls the strength of the KL penalization between the learned policy and the dataset behavior policy: a small β is beneficial for datasets with many random, noisy actions, whereas a high β favors more expert-like datasets. Because our implementation of the Gumbel regression loss normalizes gradients at the batch level, larger batches tended to be more stable and in some environments led to higher final performance. To show that our tuned X-QL method is not simply better than IQL due to a bigger batch size, we show a comparison with a fixed batch size of 1024 in Fig. 7.

D.6 ONLINE EXPERIMENTS

We base our implementation of SAC on pytorch_sac (Yarats & Kostrikov, 2020) but modify it to use a value function as described in Haarnoja et al. (2017). Empirically we see similar performance with and without using the value function, but leave it in for a fair comparison against our X-SAC variant. We base our implementation of TD3 on the original authors' code from Fujimoto et al. (2018). As in the offline experiments, hyper-parameters were left at their defaults except for β, which we tuned for each environment. For the online experiments we swept over [1, 2, 5] for X-SAC and TD3. We found that these values did not work as well for TD3 - DQ, and swept over the values [3, 4, 10, 20] instead. In the online experiments we used an exponential clip value of 8.



Footnotes:
In continuous action spaces, the sum over actions is replaced with an integral over the distribution µ.
Bounded random variables are sub-Gaussian (Young, 2020), which have exponential tails.
The same holds for soft-MDPs, as the log-sum-exp can be expanded as a max over i.i.d. Gumbel random variables.
We add -1 to make the loss 0 for a perfect fit, as $e^x - x - 1 \ge 0$ with equality at x = 0.
In fact, the theorems of CQL (Kumar et al., 2020) hold for our objective by replacing $D_{CQL}$ with $D_{KL}$.
Choosing µ to be uniform U gives the regular SAC update.



Figure 1: Bellman errors from SAC on Cheetah-Run (Tassa et al., 2018). The Gumbel distribution better captures the skew versus the Gaussian. Plots for TD3 and more environments can be found in Appendix D.

Figure 2: Left: The pdf of the Gumbel distribution with µ = 0 and different values of β. Center: Our Gumbel loss for different values of β. Right: Gumbel regression applied to a two-dimensional random variable for different values of β. The smaller the value of β, the more the regression fits the extrema.

Figure 3: Results on DM Control for the SAC- and TD3-based versions of Extreme Q-Learning.

Figure 5: Additional plots of the error distributions of SAC for different environments. We find that the Gumbel distribution strongly fits the errors in the first two environments, Cheetah and Walker, but provides a worse fit in the Hopper environment. Nonetheless, we see performance improvements in Hopper using our approach.

Figure 7: Offline RL results. We show the returns vs. the number of training iterations for the D4RL benchmark, averaged over 6 seeds. For a fair comparison, we use a batch size of 1024 for each method. XQL Tuned tunes the temperature for each environment, whereas XQL Consistent uses a default temperature.

Averaged normalized scores on MuJoCo locomotion and AntMaze tasks. X-QL-C gives results with the same consistent hyper-parameters in each domain, and X-QL-T gives results with per-environment β and hyper-parameter tuning. We see very fast convergence for our method on some tasks, saturating performance in half the iterations of IQL. Our offline results with fixed hyper-parameters for each domain outperform prior methods (Chen et al., 2021; Kumar et al., 2019; 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021) in several environments, reaching state-of-the-art on the Franka Kitchen tasks, as shown in Table

Finetuning results on the AntMaze environments

Most similar to our work, IQL (Kostrikov et al., 2021) fits expectiles of the Q-function of the behavior policy, but is not motivated by solving a particular problem or remaining conservative. On the other hand, conservatism in CQL (Kumar et al., 2020) is motivated by lower-bounding the Q-function. Our method shares the best of both worlds: like IQL, we do not evaluate the Q-function on out-of-distribution actions, and like CQL, we enjoy the benefits of conservatism. Compared to CQL, our approach uses a KL constraint with the behavior policy, and for the first time extends soft-Q learning to offline RL without needing a policy or explicit entropy values. Our choice of the reverse KL divergence for offline RL follows closely with BRAC (Wu et al., 2019) but avoids learning a policy during training.

Evaluation on Adroit tasks from D4RL. X-QL-C gives results with the same hyper-parameters as used in the Franka Kitchen and in IQL, and X-QL-T gives results with per-environment β and hyper-parameter tuning.

Acknowledgements

Div derived the theory for the Extreme Q-learning and Gumbel regression framework and ran the tuned offline RL experiments. Joey ran the consistent offline experiments and the online experiments. Both authors contributed equally to paper writing. We thank John Schulman and Bo Dai for helpful discussions. Our research was supported by NSF (1651565), AFOSR (FA95501910024), ARO (W911NF-21-1-0125), ONR, CZ Biohub, and a Sloan Fellowship. Joey was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program.


1 https://div99.github.io/XQL/


This bound also becomes tighter with increasing β, and asymptotically behaves as $\frac{X_{max}}{Z_\beta}\sqrt{\frac{2\log(1/\delta)}{N}}$.
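The Hoeffding-based bound above can be stress-tested by simulation: over many finite batches, the fraction of times $\exp(\hat{L}_\beta(X)/\beta)$ exceeds $Z_\beta$ plus the stated slack should be at most δ. Here X is taken uniform on $[-X_{max}, X_{max}]$, for which $Z_\beta = \mathbb{E}[e^{X/\beta}]$ has the closed form $(\beta/X_{max})\sinh(X_{max}/\beta)$; the sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
x_max, beta, n, delta, trials = 1.0, 1.0, 100, 0.1, 2000

# For X ~ Uniform[-x_max, x_max], the partition function in closed form.
z_beta = (beta / x_max) * np.sinh(x_max / beta)
# The deviation s from the lemma, at confidence level 1 - delta.
slack = np.sinh(x_max / beta) * np.sqrt(2.0 * np.log(1.0 / delta) / n)

x = rng.uniform(-x_max, x_max, size=(trials, n))
estimates = np.exp(x / beta).mean(axis=1)   # exp(L_hat_beta(X) / beta) per batch
violation_rate = np.mean(estimates > z_beta + slack)
```

Since Hoeffding's inequality is loose for this distribution, the observed violation rate is far below δ, and the estimator is unbiased for $Z_\beta$.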

C EXTREME Q-LEARNING

In this section we provide additional theoretical details of our algorithm, X-QL, and its connection to conservatism in CQL (Kumar et al., 2020).

C.1 X-QL

For the soft-Bellman equation given as
$$V(s) = L^\beta_{a\sim\mu}\big[r(s,a) + \gamma\,\mathbb{E}_{s'|s,a}[V(s')]\big], \tag{35}$$
we have the fixed-point characterization, which can be found with the recurrence
$$V_{t+1}(s) = L^\beta_{a\sim\mu}\big[r(s,a) + \gamma\,\mathbb{E}_{s'|s,a}[V_t(s')]\big].$$
In the main paper we discuss the case of X-QL under stochastic dynamics, which requires the estimation of $\mathcal{B}^*$. Under deterministic dynamics, however, this can be avoided, as we do not need to account for an expectation over next states. This simplifies the Bellman equations. We develop two simple algorithms for this case that do not need $\mathcal{B}^*$.

Value Iteration. We can write the value-iteration objective as
$$\mathcal{J}(\theta) = \mathbb{E}_{s\sim\rho_\mu,\,a\sim\mu(\cdot|s)}\big[e^{(Q(s,a) - V_\theta(s))/\beta} - (Q(s,a) - V_\theta(s))/\beta - 1\big]. \tag{37}$$
Here, we learn a single model of the values $V_\theta(s)$ to directly solve Equation 35. For the current value estimate $V_\theta(s)$, we calculate targets $Q(s,a) = r(s,a) + \gamma V_\theta(s')$ and find a new estimate $V'_\theta(s)$ by fitting $L^\beta_\mu$ with our objective $\mathcal{J}$. Using our Gumbel Regression framework, we can guarantee that $\mathcal{J}$ finds a consistent estimate of $L^\beta_\mu$, and $V_\theta(s)$ will converge to the optimal V(s) up to some sampling error.

Q-Iteration. Alternatively, we can develop a Q-iteration objective solving the recurrence
$$Q_{t+1}(s,a) = r(s,a) + \gamma\,L^\beta_{a'\sim\mu}\big[Q_t(s',a')\big],$$
where we can rescale β to γβ to move $L$ out, since $L^{\gamma\beta}_{a'\sim\mu}[r(s,a) + \gamma Q_t(s',a')] = r(s,a) + \gamma L^{\beta}_{a'\sim\mu}[Q_t(s',a')]$. This gives the objective
$$\mathcal{J}(\theta) = \mathbb{E}\big[e^{(\mathcal{T}^\mu Q(s,a) - Q_\theta(s,a))/\gamma\beta} - (\mathcal{T}^\mu Q(s,a) - Q_\theta(s,a))/\gamma\beta - 1\big], \quad \mathcal{T}^\mu Q(s,a) = r(s,a) + \gamma Q(s',a').$$
Thus, this gives a method to directly estimate $Q_\theta$ without learning values, and forms our X-TD3 method in the main paper. Note that β is a hyperparameter, so we can use an alternative hyperparameter β′ = γβ to simplify the above. We can formalize this as a lemma in the deterministic case, stated with the operator $\mathcal{T}^\mu$ at the end of Appendix B.2.

Additional plots of the error distributions for SAC and TD3 can be found in Figure 5 and Figure 6, respectively. Figure 1 and the aforementioned plots were generated by running the RL algorithms for 100,000 timesteps and logging the Bellman errors every 5,000 steps.
In particular, the Bellman errors were computed as
$$r(s,a) + \gamma Q_{\theta_1}\big(s', \pi_\psi(s')\big) - Q_{\theta_1}(s,a),$$
where $Q_{\theta_1}$ represents the first of the two Q-networks used in the double-Q trick. We do not use target networks to compute the Bellman error, and instead compute the fully online quantity. $\pi_\psi(s')$ represents the mean or deterministic output of the current policy distribution. We used an implementation of SAC based on Yarats & Kostrikov (2020) and an implementation of TD3 based on Fujimoto et al. (2018). For SAC, the entropy term was not added when computing the error, as we seek to characterize the standard Bellman error and not the soft-Bellman error. Before generating the plots, the errors were clipped to the ranges shown; this prevented over-fitting to large outliers. The Gumbel and Gaussian curves were fit using MLE via SciPy.
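The MLE fitting step can be sketched as follows. On synthetic Gumbel-distributed "errors" the fitted Gumbel recovers the generating parameters and out-scores the fitted Gaussian in average log-likelihood; the parameters and sample size below are arbitrary:

```python
import numpy as np
from scipy.stats import gumbel_r, norm

rng = np.random.default_rng(7)
# Synthetic stand-in for logged Bellman errors, drawn from a known Gumbel.
errors = rng.gumbel(loc=0.5, scale=0.3, size=50_000)

# MLE fits via SciPy, as done for the curves in Figure 1 and Figure 5.
g_loc, g_scale = gumbel_r.fit(errors)
n_loc, n_scale = norm.fit(errors)

# Average log-likelihood under each fitted model; the Gumbel fit should win
# on skewed data, mirroring the comparison in the plots.
ll_gumbel = gumbel_r.logpdf(errors, loc=g_loc, scale=g_scale).mean()
ll_normal = norm.logpdf(errors, loc=n_loc, scale=n_scale).mean()
```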

D.3 NUMERIC STABILITY

In practice, a naive implementation of the Gumbel loss function $\mathcal{J}$ from Equation 11 suffers from stability issues due to the exponential term. We found that stabilizing the loss objective was essential for training. Practically, we follow the common max-normalization trick used in softmax computation. This amounts to factoring out $e^{\max z}$ from the loss and consequently rescaling the gradients, which adds a per-batch adaptive normalization to the learning rate. We additionally clip loss inputs that are too large to prevent outliers. An example code snippet in PyTorch is included below. In some experiments we additionally clip the value of the gradients for stability.
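The referenced PyTorch snippet is not reproduced in this copy; the following is a NumPy sketch of the stabilization described, with an illustrative function name and default clip value. In a PyTorch version, `max_z` would additionally be `.detach()`-ed so that the normalizer does not participate in the gradient:

```python
import numpy as np

def gumbel_loss(pred, target, beta=1.0, clip=7.0):
    """Gumbel regression loss e^z - z - 1 with max-normalization for stability."""
    z = (target - pred) / beta
    z = np.clip(z, -clip, clip)        # clip large inputs to tame outliers
    max_z = max(z.max(), -1.0)         # factor out e^{max z}, floored at -1
    # Elementwise e^{-max_z} * (e^z - z - 1), computed without overflow.
    loss = np.exp(z - max_z) - z * np.exp(-max_z) - np.exp(-max_z)
    # Returning the rescaled mean gives the per-batch adaptive normalization
    # of the gradients described in the text.
    return loss.mean()
```

For a perfect fit (`pred == target`) the loss is exactly zero, and it is non-negative everywhere since $e^z - z - 1 \ge 0$.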

D.4 OFFLINE EXPERIMENTS

In this subsection, we provide additional results in the offline setting along with hyper-parameter and implementation details. Table 3 shows results for the Adroit benchmark in D4RL. Again, we see strong results for X-QL, where X-QL-C with the same hyper-parameters as used in the Franka Kitchen environments surpasses prior works on five of the eight tasks. Figure 7 shows learning curves which include baseline methods. We see that X-QL exhibits extremely fast convergence, particularly when tuned. One issue, however, is numerical stability: the untuned version of X-QL exhibits divergence on the AntMaze environments. We base our implementation of X-QL off the official implementation of IQL from Kostrikov et al. (2021). We use the same network architecture and also apply the double-Q trick. For SAC we ran three seeds in each environment as it tended to be more stable; for TD3 we ran four. Occasionally, our X-variants would experience instability due to outliers in collected online policy rollouts causing exploding loss terms. We see this primarily in the Hopper and Quadruped environments, and rarely for Cheetah or Walker. For Hopper and Quadruped, we found that approximately one in six runs became unstable after about 100k gradient steps. This sort of instability is also common in other online RL algorithms like PPO due to noisy online policy collection. We restarted runs that became unstable during training. We verified our SAC results by comparing to Yarats & Kostrikov (2020) and our TD3 results by comparing to Li (2021).
We found that our TD3 implementation performed marginally better overall.

Published as a conference paper at ICLR 2023

Table 4: Hyper-parameters per environment, listed as consistent (tuned) values for the temperature β, the exponential clip value, the batch size, and one further per-environment setting:

antmaze-umaze-v0: … (7), 256 (256), 1 (1)
antmaze-umaze-diverse-v0: 0.6 (5), 7 (7), 256 (256), 1 (1)
antmaze-medium-play-v0: 0.6 (0.8), 7 (7), 256 (1024), 1 (2)
antmaze-medium-diverse-v0: 0.6 (0.6), 7 (7), 256 (256), 1 (4)
antmaze-large-play-v0: 0.6 (0.6), 7 (5), 256 (1024), 1 (1)
antmaze-large-diverse-v0: 0.6 (0.6), 7 (5), 256 (1024), 1 (1)
kitchen-complete-v0: 5 (2), 7 (7), 256 (1024), 1 (1)
kitchen-partial-v0: 5 (5), 7 (7), 256 (1024), 1 (1)
kitchen-mixed-v0: 5 (8), 7 (7), 256 (1024), 1 (1)
pen-human-v0: 5 (5), 7 (7), 256 (256), 1 (1)
hammer-human-v0: 5 (0.5), 7 (3), 256 (1024), 1 (4)
door-human-v0: 5 (1), 7 (5), 256 (256), 1 (1)
relocate-human-v0: 5 (0.8), 7 (5), 256 (1024), 1 (2)
pen-cloned-v0: 5 (0.8), 7 (5), 256 (…)

… benchmark in D4RL (averaged over 6 seeds). X-QL Tuned gives results after hyper-parameter tuning to reduce run variance for each environment, and X-QL Consistent uses the same hyper-parameters for every environment. On some environments the "consistent" hyper-parameters did best.

