EMAQ: EXPECTED-MAX Q-LEARNING OPERATOR FOR SIMPLE YET EFFECTIVE OFFLINE AND ONLINE RL

Abstract

Off-policy reinforcement learning (RL) holds the promise of sample-efficient learning of decision-making policies by leveraging past experience. However, in the offline RL setting, where a fixed collection of interactions is provided and no further interaction is allowed, standard off-policy RL methods have been shown to significantly underperform. Recently proposed methods often aim to address this shortcoming by constraining learned policies to remain close to the given dataset of interactions. In this work, we closely investigate an important simplification of BCQ (Fujimoto et al., 2018a), a prior approach for offline RL, which removes a heuristic design choice and naturally restricts extracted policies to remain exactly within the support of a given behavior policy. Importantly, in contrast to the theoretical considerations of the original work, we derive this simplified algorithm through the introduction of a novel backup operator, Expected-Max Q-Learning (EMaQ), which is more closely related to the resulting practical algorithm. Specifically, in addition to the distribution support, EMaQ explicitly considers the number of samples and the proposal distribution, allowing us to derive new sub-optimality bounds that can serve as a novel measure of complexity for offline RL problems. In the offline RL setting, the main focus of this work, EMaQ matches or outperforms prior state-of-the-art methods on the D4RL benchmarks (Fu et al., 2020a). In the online RL setting, we demonstrate that EMaQ is competitive with Soft Actor-Critic (SAC). Our key empirical contributions are demonstrating the importance of careful generative model design when estimating behavior policies, and providing an intuitive notion of complexity for offline RL problems. With its simple interpretation and fewer moving parts, such as no explicit function approximator representing the policy, EMaQ serves as a strong yet easy-to-implement baseline for future work.
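As a concrete illustration of the backup described above, the following is a minimal sketch of how an expected-max target can be computed: for each next state, N candidate actions are sampled from a proposal (behavior) distribution and the maximum Q-value over those samples is backed up. The function and parameter names (`q_fn`, `proposal`, `n_samples`) are illustrative placeholders and not the authors' implementation.

```python
import numpy as np

def emaq_backup_target(q_fn, proposal, rewards, next_states, dones,
                       gamma=0.99, n_samples=16):
    """Sketch of an expected-max backup target (assumed interface, not the
    authors' code).

    q_fn(states, actions) -> Q-values, shape [batch * N]
    proposal(states, n)   -> sampled actions, shape [batch, n, act_dim]
    """
    batch = next_states.shape[0]
    # Sample N candidate actions per next state from the proposal distribution.
    actions = proposal(next_states, n_samples)                  # [batch, N, act_dim]
    # Evaluate Q on every (next_state, sampled_action) pair.
    flat_states = np.repeat(next_states, n_samples, axis=0)     # [batch * N, obs_dim]
    flat_actions = actions.reshape(batch * n_samples, -1)       # [batch * N, act_dim]
    q_values = q_fn(flat_states, flat_actions).reshape(batch, n_samples)
    # Take the max over the N sampled actions and form the bootstrapped target.
    max_q = q_values.max(axis=1)                                # [batch]
    return rewards + gamma * (1.0 - dones) * max_q
```

Note that the number of samples N and the proposal distribution jointly determine how greedy the backup is, which is what the sub-optimality bounds referenced above quantify.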

1. INTRODUCTION

Leveraging past interactions in order to improve a decision-making process is the hallmark goal of off-policy reinforcement learning (RL) (Precup et al., 2001; Degris et al., 2012). Effectively learning from past experience can significantly reduce the amount of online interaction required to learn a good policy, and is a particularly crucial ingredient in settings where interactions are costly or safety is of great importance, such as robotics (Gu et al., 2017; Kalashnikov et al., 2018a), health (Murphy et al., 2001), dialog agents (Jaques et al., 2019), and education (Mandel et al., 2014).

In recent years, with neural networks taking a more central role in the RL literature, there have been significant advances in developing off-policy RL algorithms for the function-approximator setting, where policies and value functions are represented by neural networks (Mnih et al., 2015; Lillicrap et al., 2015; Gu et al., 2016a;b; Haarnoja et al., 2018; Fujimoto et al., 2018b). Such algorithms, while off-policy in nature, are typically trained in an online setting where algorithm updates are interleaved with additional online interactions. However, in purely offline RL settings, where a dataset of interactions is provided ahead of time and no additional interactions are allowed, the performance of these algorithms degrades drastically (Fujimoto et al., 2018a; Jaques et al., 2019). A number of recent methods have been developed to address this shortcoming of off-policy RL algorithms. A particular class of algorithms for offline RL that have enjoyed recent success are those that constrain the learned policy to remain close to the provided dataset of interactions.

