DECORRELATED DOUBLE Q-LEARNING

Abstract

Q-learning with value function approximation may perform poorly because of overestimation bias and imprecise value estimates. Specifically, overestimation bias arises from applying the maximum operator to noisy estimates, and is amplified when bootstrapping on the estimate of a subsequent state. Inspired by recent advances in deep reinforcement learning and by Double Q-learning, we introduce decorrelated double Q-learning (D2Q). Specifically, we introduce a Q-value function utilizing control variates, together with a decorrelation regularizer that reduces the correlation between the two value function approximators, which leads to less biased estimation and lower variance. Experimental results on a suite of MuJoCo continuous control tasks demonstrate that decorrelated double Q-learning effectively improves performance.

1. INTRODUCTION

Q-learning Watkins & Dayan (1992), as a model-free reinforcement learning approach, has gained popularity, especially with the advance of deep neural networks Mnih et al. (2013). In general, it combines neural network approximators with actor-critic architectures Witten (1977); Konda & Tsitsiklis (1999), which have an actor network that controls how the agent behaves and a critic that evaluates how good the taken action is. The Deep Q-Network (DQN) algorithm Mnih et al. (2013) first applied a deep neural network to approximate the action-value function in Q-learning and showed remarkably good and stable results by introducing a target network and an experience replay buffer to stabilize training. Lillicrap et al. (2015) propose DDPG, which extends Q-learning to continuous action spaces using target networks.

Beyond training stability, another issue Q-learning suffers from is overestimation bias, which was first investigated in Thrun & Schwartz (1993). Because of noise in function approximation, the maximum operator in Q-learning can lead to overestimation of state-action values. The overestimation property is also observed in deterministic continuous policy control Silver & Lever (2014). In particular, with imprecise function approximation, maximizing over a noisy value induces overestimation in the action-value function. This inaccuracy can become even worse (e.g., through error accumulation) under temporal-difference learning Sutton & Barto (1998), in which bootstrapping is used to update the value function with the estimate of a subsequent state.

Given the overestimation bias caused by the maximum operator over noisy estimates, many methods have been proposed to address this issue. Double Q-learning van Hasselt (2010) mitigates the overestimation effect by introducing two independently trained critics to estimate the maximum value of a set of stochastic values. Averaged-DQN Anschel et al. (2017) takes the average of previously learned Q-value estimates, which results in a more stable training procedure and reduces the approximation error variance in the target values. Recently, Twin Delayed Deep Deterministic Policy Gradients (TD3) Fujimoto et al. (2018) extends Double Q-learning by using the minimum of two critics to limit overestimation bias in the actor-critic setting. The soft actor-critic algorithm Haarnoja et al. (2018) leverages a strategy similar to TD3, while including a maximum-entropy term to balance exploration and exploitation. Maxmin Q-learning Lan et al. (2020) proposes an ensemble scheme to handle overestimation bias in Q-learning.

This work suggests an alternative solution to the overestimation phenomenon, called decorrelated double Q-learning, based on reducing the noise in Q-value estimates. On the one hand, we want to make the two value function approximators as independent as possible to mitigate overestimation.
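The contrast between the single-critic maximum target, the Double Q-learning target (select with one critic, evaluate with the other), and the TD3-style clipped target (minimum of two critics) can be sketched on toy values; the numbers below are illustrative only, not from any experiment in this paper:

```python
import numpy as np

# Hypothetical example: two independent critics' Q-estimates for one
# next state with 3 actions, plus a reward and discount factor.
q1 = np.array([1.0, 2.5, 0.7])   # critic 1 estimates
q2 = np.array([1.2, 2.0, 0.9])   # critic 2 estimates
reward, gamma = 0.5, 0.99

# Single-critic target: max over possibly noisy estimates,
# the source of overestimation bias.
single_target = reward + gamma * q1.max()

# Double Q-learning target: select the greedy action with one critic,
# evaluate it with the other, decoupling selection from evaluation.
a_star = int(q1.argmax())
double_target = reward + gamma * q2[a_star]

# TD3-style clipped target: take the minimum of the two critics
# at the selected action, further limiting overestimation.
clipped_target = reward + gamma * min(q1[a_star], q2[a_star])
```

By construction the clipped target can never exceed the double target, and both are at most the single-critic maximum, which is why min-clipping trades a little underestimation for protection against the exaggerated maximum.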

