HOW DOES VALUE DISTRIBUTION IN DISTRIBUTIONAL REINFORCEMENT LEARNING HELP OPTIMIZATION?

Abstract

We consider the problem of learning a set of probability distributions from the Bellman dynamics in distributional reinforcement learning (RL), which learns the whole return distribution rather than only its expectation as in classical RL. Despite its success in achieving superior performance, we still have a poor understanding of how the value distribution in distributional RL works. In this study, we analyze the optimization benefits of distributional RL by leveraging its additional value distribution information over classical RL within the Neural Fitted Z-Iteration (Neural FZI) framework. To begin with, we demonstrate that the distribution loss of distributional RL has desirable smoothness characteristics and hence enjoys stable gradients, which is in line with its tendency to promote optimization stability. Furthermore, the acceleration effect of distributional RL is revealed by decomposing the return distribution. It turns out that distributional RL performs favorably when the value distribution is approximated appropriately, as measured by the variance of gradient estimates in each environment for a given distributional RL algorithm. Rigorous experiments validate the stable optimization behavior of distributional RL, which contributes to its acceleration effect over classical RL. Our findings illuminate how the value distribution in distributional RL algorithms helps optimization.

1. INTRODUCTION

Distributional reinforcement learning (Bellemare et al., 2017a; Dabney et al., 2018b;a; Yang et al., 2019; Zhou et al., 2020; Nguyen et al., 2020; Luo et al., 2021; Sun et al., 2022) characterizes the intrinsic randomness of returns within the framework of Reinforcement Learning (RL). When the agent interacts with the environment, the intrinsic uncertainty of the environment manifests in the stochasticity of the rewards the agent receives and in the inherently chaotic state and action dynamics of physical interaction, increasing the difficulty of RL algorithm design. Distributional RL aims to represent the entire distribution of returns in order to capture more of the intrinsic uncertainty of the environment, and then to use these value distributions to evaluate and optimize the policy. This is in stark contrast to classical RL, which focuses only on the expectation of the return distribution, as in temporal-difference (TD) learning (Sutton & Barto, 2018) and Q-learning (Watkins & Dayan, 1992). As a promising branch of RL algorithms, distributional RL has demonstrated state-of-the-art performance in a wide range of environments, e.g., Atari games, in which the representation of return distributions and the distribution divergence between the current and target return distributions within each Bellman update are pivotal to its empirical success (Dabney et al., 2018a; Sun et al., 2021b; 2022).

Specifically, categorical distributional RL, e.g., C51 (Bellemare et al., 2017a; Rowland et al., 2018), represents the return distribution as a categorical distribution by approximating density probabilities over pre-specified bins within a bounded range and minimizing a Kullback-Leibler (KL) divergence, serving as the first successful distributional RL family in recent years. Quantile Regression (QR) distributional RL, e.g., QR-DQN (Dabney et al., 2018b), approximates the Wasserstein distance via the quantile regression loss and leverages quantiles to represent the whole return distribution.
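As a rough illustration of the QR family described above, the quantile (Huber) regression loss of QR-DQN can be sketched as follows; this is a minimal NumPy sketch for intuition, not the paper's implementation, and the function name, array shapes, and default `kappa` are our own choices:

```python
import numpy as np

def quantile_huber_loss(quantiles, targets, kappa=1.0):
    """Quantile Huber loss in the style of QR-DQN (Dabney et al., 2018b).

    quantiles: shape (N,), predicted quantile values theta_i of the return
    targets:   shape (M,), samples from the target (Bellman) return
    kappa:     Huber threshold smoothing the loss near zero TD error
    """
    N = len(quantiles)
    taus = (np.arange(N) + 0.5) / N  # fixed quantile midpoints tau_i
    # pairwise TD errors u_{ij} = target_j - quantile_i, shape (N, M)
    u = targets[None, :] - quantiles[:, None]
    # Huber transform: quadratic for |u| <= kappa, linear beyond
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # asymmetric weight |tau_i - 1{u < 0}| turns Huber into quantile regression
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return (weight * huber / kappa).mean()
```

Minimizing this loss drives each theta_i toward the tau_i-quantile of the target return distribution, which is how QR-DQN approximates the Wasserstein projection in practice.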
Other variants of QR-DQN, including Implicit Quantile Networks (IQN) (Dabney et al., 2018a) and the Fully parameterized Quantile Function (FQF) (Yang et al., 2019), can even achieve significantly better performance across

