KNOW YOUR BOUNDARIES: THE ADVANTAGE OF EXPLICIT BEHAVIORAL CLONING IN OFFLINE RL

Abstract

We introduce an offline reinforcement learning (RL) algorithm that explicitly clones a behavior policy to constrain value learning. In offline RL, it is often important to prevent a policy from selecting unobserved actions, since the consequence of these actions cannot be presumed without additional information about the environment. One straightforward way to implement such a constraint is to explicitly model a given data distribution via behavior cloning and directly force a policy not to select uncertain actions. However, many offline RL methods instantiate the constraint indirectly (for example, via pessimistic value estimation) due to a concern about errors when modeling a potentially complex behavior policy. In this work, we argue that it is not only viable but beneficial to explicitly model the behavior policy for offline RL because the constraint can be realized in a stable way with the explicitly cloned model. We first suggest a theoretical framework that allows us to incorporate behavior-cloned models into value-based offline RL methods, enjoying the strength of both explicit behavior cloning and value learning. Then, we propose a practical method utilizing a score-based generative model for behavior cloning to better handle the complicated behaviors that an offline RL dataset might contain. The proposed method shows state-of-the-art performance on several datasets within the D4RL and Robomimic benchmarks and achieves competitive performance across all datasets tested.

1. INTRODUCTION

The goal of offline reinforcement learning (RL) is to learn a policy purely from pre-generated data. This data-driven RL paradigm is promising since it opens up the possibility of applying RL widely to realistic scenarios where large-scale data is available. Two primary targets need to be considered in designing offline RL algorithms: maximizing reward and staying close to the provided data. Finding a policy that maximizes the accumulated sum of rewards is the main objective in RL, and this can be achieved by learning an optimal Q-value function. However, in the offline setup, it is often infeasible to infer a precise optimal Q-value function due to limited data coverage (Levine et al., 2020; Liu et al., 2020); for example, the value of states not shown in the dataset cannot be estimated without additional assumptions about the environment. This implies that value learning can typically be performed accurately only for the subset of the state (or state-action) space covered by a dataset. Because of this limitation, offline RL algorithms should implement some form of imitation learning objective that can force a policy to stay close to the given data. Recently, many offline RL algorithms have been proposed that instantiate an imitation learning objective without explicitly modeling the data distribution of the provided dataset. For instance, one approach applies the pessimism-under-uncertainty principle in value learning (Buckman et al., 2020; Kumar et al., 2020; Kostrikov et al., 2021a) in order to prevent out-of-distribution actions from being selected.
While these methods show promising practical results in certain domains, it has also been reported that they fall short of simple behavior cloning methods (Mandlekar et al., 2021; Florence et al., 2021), which only model the data distribution without exploiting any reward information. We hypothesize that this deficiency occurs because the imitation learning objective in these methods is realized only indirectly, without explicitly modeling the data distribution (e.g., via pessimistic value prediction). Such an indirect realization can be far more complicated than simple behavior cloning for some data distributions, since it is often entangled with the unstable training dynamics caused by bootstrapping and function approximation. Hence, implicit methods are prone to over-regularization (Kumar et al., 2021) or outright failure, and they may require delicate hyperparameter tuning to avoid these issues (Emmons et al., 2022). Yet, at the same time, simple behavior cloning clearly cannot extract a good policy from a data distribution composed of suboptimal policies. To this end, we ask the following question in this paper: Can offline RL benefit from explicitly modeling the data distribution via behavior cloning, no matter what kind of data distribution is given? There have been previous attempts to use an explicitly trained behavior cloning model in offline RL (Wu et al., 2019; Kumar et al., 2019; Fujimoto et al., 2019; Liu et al., 2020), but we argue that two important elements are missing from existing algorithms. First, high-fidelity generative models have not been integrated with offline RL algorithms, even though inaccurate estimation of the behavior policy could limit the final performance of an algorithm (Levine et al., 2020). Florence et al. (2021) employ an energy-based generative model, but the resulting method is pure imitation learning that does not incorporate a value function. Second, trained behavior cloning models have only been utilized in certain limited forms that are justified only empirically or heuristically, such as the KL (Wu et al., 2019) or MMD (Kumar et al., 2019) divergence between the cloned policy and an actor policy. We tackle these two problems by, first, incorporating a state-of-the-art score-based generative model (Song & Ermon, 2019; 2020; Song et al., 2021) to achieve the fidelity required for offline RL, and second, proposing a theoretical framework, direct Q-penalization (DQP), that provides a principled mechanism for integrating the trained behavior model into value learning. Furthermore, DQP provides an integrated view of different offline RL algorithms, helping to analyze their possible failure modes. We evaluate our algorithm on benchmark datasets that differ in quality and complexity, namely D4RL and Robomimic. Our method shows not only competitive performance across different types of datasets but also state-of-the-art results on complex contact-rich tasks, such as the transport and tool-hang tasks in Robomimic. The results demonstrate the effectiveness of the proposed algorithm as well as the practical advantage of explicit behavior cloning, which was previously considered infeasible or a bottleneck that would limit final offline RL performance (Levine et al., 2020).
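To give a rough sense of the idea behind direct Q-penalization, consider the following minimal sketch. It is our illustrative assumption, not the paper's exact formulation: the bootstrapped target is penalized in proportion to how unlikely the chosen action is under the explicitly cloned behavior model, so the learned Q-function never prefers out-of-distribution actions. All names, the penalty form, and the threshold are hypothetical.

```python
import numpy as np

def penalized_q_target(q_next, reward, behavior_log_prob,
                       gamma=0.99, alpha=1.0, log_prob_floor=-5.0):
    """Illustrative sketch of a direct Q-penalization target.

    Actions that the cloned behavior model deems unlikely (log-probability
    below `log_prob_floor`) receive a penalty subtracted from the
    bootstrapped value; in-distribution actions are left untouched.
    The hinge-style penalty and all constants are assumptions for
    illustration only.
    """
    # Penalty grows linearly as the action's behavior log-probability
    # falls below the floor.
    penalty = alpha * np.maximum(log_prob_floor - behavior_log_prob, 0.0)
    return reward + gamma * (q_next - penalty)
```

For an in-distribution action (log-probability at or above the floor) this reduces to the ordinary TD target, while a highly unlikely action yields a strictly smaller target, which is the constraint the cloned model makes explicit.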
To summarize, our contributions are: (1) We provide a theoretical framework for offline RL, DQP, which offers a unified view of previously disparate offline RL algorithms; (2) Using DQP, we suggest a principled offline RL formulation that incorporates an explicitly trained behavior cloning model; (3) We propose a practical algorithm that instantiates this formulation, leveraging a score-based generative model; and (4) We achieve competitive and state-of-the-art performance across a variety of offline RL datasets.
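The score-based behavior cloning component (contribution 3) rests on training a network to estimate the score of the data distribution. As a heavily simplified, hypothetical sketch of that training signal (state conditioning omitted, `score_net` a placeholder, single fixed noise scale rather than the multi-scale schedule used by score-based generative models): perturb dataset actions with Gaussian noise and regress the network onto the score of the perturbed distribution, whose regression target for noise eps ~ N(0, sigma^2) is -eps / sigma^2.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsm_loss(score_net, actions, noise_std=0.1):
    """Denoising score matching loss (illustrative sketch).

    `score_net(noised_actions)` should approximate the score
    grad_a log p(a) of the noise-perturbed action distribution.
    Interface and single noise scale are our assumptions.
    """
    eps = rng.normal(0.0, noise_std, size=actions.shape)
    noised = actions + eps
    # Regression target for Gaussian perturbation noise.
    target = -eps / noise_std**2
    pred = score_net(noised)
    return np.mean((pred - target) ** 2)
```

Minimizing such a loss over dataset actions yields a model of the behavior distribution's score, which can then supply the likelihood signal that a penalization scheme like DQP consumes.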

2. RELATED WORKS

The end goal of offline RL is to extract the best possible policy from a given dataset, regardless of the quality of the trajectories that compose it (Ernst et al., 2005; Riedmiller, 2005; Lange et al., 2012; Levine et al., 2020). One of the simplest approaches to this problem is imitation learning (IL) (Schaal, 1999; Florence et al., 2021), which hopes to recover the performance of the behavior policy that generated the dataset. However, simple imitation fails to achieve the end goal of offline RL, since one cannot outperform the behavior policy by merely imitating it. This problem is commonly addressed with value learning, which must contend with the distribution shift that arises in the offline setup. Since distribution shift commonly results in value overestimation, offline RL algorithms try to estimate values pessimistically for out-of-distribution inputs (Kumar et al., 2020; Goo & Niekum, 2021), sometimes by explicitly quantifying uncertainty with a trained transition dynamics model (Yu et al., 2020; Kidambi et al., 2020), a generative model (Rezaeifar et al., 2021), or a pseudometric (Dadashi et al., 2021). Distribution shift is also commonly addressed by constraining a policy to stay close to the behavior policy. Specifically, based on

