INCORPORATING EXPLICIT UNCERTAINTY ESTIMATES INTO DEEP OFFLINE REINFORCEMENT LEARNING

Abstract

Most theoretically motivated work in the offline reinforcement learning setting requires precise uncertainty estimates, which restricts the resulting algorithms to the tabular and linear settings where such estimates exist. In this work, we develop deep-SPIBB, a novel method for incorporating scalable uncertainty estimates into offline reinforcement learning that extends the SPIBB family of algorithms to environments with larger state and action spaces. We use recent innovations in uncertainty estimation from the deep learning community to obtain more scalable uncertainty estimates to plug into deep-SPIBB. While these uncertainty estimates do not allow for the same theoretical guarantees as in the tabular case, we argue that the SPIBB mechanism for incorporating uncertainty is more robust and flexible than pessimistic approaches that incorporate the uncertainty as a value-function penalty. We bear this out empirically, showing that deep-SPIBB outperforms pessimism-based approaches with access to the same uncertainty estimates and performs at least on par with a variety of other strong baselines across several environments and datasets.

1. INTRODUCTION

In the study of offline reinforcement learning (OffRL), uncertainty plays a key role (Buckman et al., 2020; Levine et al., 2020). This is because, unlike online RL, where an agent receives feedback in the form of low rewards after taking a bad action, an OffRL agent must learn from a fixed dataset without feedback from the environment. As a result, a consistent issue for OffRL algorithms is the overestimation of states and actions that are not seen in the dataset, leading to poor performance when the agent is deployed and finds that those states and actions in fact have low reward (Fujimoto et al., 2019b). To overcome this issue, OffRL algorithms often attempt to incorporate some notion of uncertainty to ensure that the learned policy avoids regions of high uncertainty. There are two main issues with this approach: (1) how to define uncertainty and (2) how to incorporate uncertainty estimates into the OffRL algorithm. In tabular and linear MDPs, issue (1) is resolved by using visitation counts and elliptical confidence regions, respectively (Yin et al., 2021; Yin & Wang, 2021; Jin et al., 2021; Laroche et al., 2019). In the large-scale MDPs that we consider, neither of these solutions works, but there is a large literature from the deep learning community on uncertainty quantification that we can leverage for OffRL (Ciosek et al., 2019; Osband et al., 2018; 2021; Burda et al., 2019; Ostrovski et al., 2017; Lakshminarayanan et al., 2017; Blundell et al., 2015; Gal & Ghahramani, 2016). Given these uncertainty estimators, this paper focuses primarily on issue (2): how to incorporate uncertainty for OffRL. To understand how best to incorporate uncertainty into an OffRL algorithm, we first provide a high-level algorithmic template that captures the majority of related work as instances of modified policy iteration (Scherrer et al., 2012) that alternate between policy evaluation and policy improvement.
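To make the template concrete, the following is a minimal tabular sketch of modified policy iteration that alternates partial policy evaluation with greedy improvement. The `uncertainty` and `beta` arguments are hypothetical hooks (not from the paper's implementation) illustrating how an explicit per-state-action uncertainty estimate can enter the evaluation step as a value penalty, i.e. the pessimism variant; with `beta=0` it reduces to plain modified policy iteration.

```python
import numpy as np

def modified_policy_iteration(P, R, gamma, uncertainty=None, beta=0.0,
                              n_iters=50, n_eval=10):
    """Generic OffRL-style template (illustrative sketch).

    P: transition tensor of shape (S, A, S); R: reward matrix (S, A).
    uncertainty: optional (S, A) array of explicit uncertainty estimates,
    subtracted from the value target with weight beta (pessimism).
    """
    S, A = R.shape
    Q = np.zeros((S, A))
    pi = np.zeros(S, dtype=int)
    for _ in range(n_iters):
        # Partial policy evaluation: a few Bellman backups under pi.
        for _ in range(n_eval):
            V = Q[np.arange(S), pi]          # V(s) = Q(s, pi(s))
            Q = R + gamma * (P @ V)          # one-step backup
            if uncertainty is not None:
                Q = Q - beta * uncertainty   # value penalty from explicit estimates
        # Policy improvement: greedy w.r.t. the (penalized) Q-values.
        pi = Q.argmax(axis=1)
    return Q, pi

# Usage on a small random MDP (seeded for reproducibility).
rng = np.random.default_rng(0)
S, A = 4, 2
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)           # normalize to valid transitions
R = rng.random((S, A))
Q_plain, pi_plain = modified_policy_iteration(P, R, gamma=0.9)
Q_pess, pi_pess = modified_policy_iteration(
    P, R, gamma=0.9, uncertainty=np.ones((S, A)), beta=0.5)
```

A uniform penalty shifts all values down without changing the greedy policy; in practice the estimates vary across state-action pairs, which is what steers the learned policy away from uncertain regions.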
We can then sort prior work into four categories along two axes: whether the algorithm modifies the evaluation step or the improvement step, and whether the algorithm uses an explicit uncertainty estimator or not. One class of algorithms modifies the evaluation step by introducing value penalties based on explicit uncertainty estimates, which we will call pessimism (Petrik et al., 2016; Buckman et al., 2020; Jin et al., 2021). An alternative modifies the value estimation without using an uncertainty estimate, as in CQL (Kumar et al., 2020). Another family uses behavior constraints that modify the policy improvement step to keep the learned policy near the behavior policy (Fujimoto et al.,

