DIFFUSION POLICIES AS AN EXPRESSIVE POLICY CLASS FOR OFFLINE REINFORCEMENT LEARNING

Abstract

Offline reinforcement learning (RL), which aims to learn an optimal policy using a previously collected static dataset, is an important paradigm of RL. Standard RL methods often perform poorly in this regime due to function approximation errors on out-of-distribution actions. While a variety of regularization methods have been proposed to mitigate this issue, they are often constrained by policy classes with limited expressiveness that can lead to highly suboptimal solutions. In this paper, we propose representing the policy as a diffusion model, a recent class of highly expressive deep generative models. We introduce Diffusion Q-learning (Diffusion-QL), which utilizes a conditional diffusion model to represent the policy. In our approach, we learn an action-value function and add a term that maximizes action-values to the training loss of the conditional diffusion model, which results in a loss that seeks optimal actions near the behavior policy. We show that the expressiveness of the diffusion model-based policy and the coupling of behavior cloning and policy improvement under the diffusion model both contribute to the outstanding performance of Diffusion-QL. We illustrate the superiority of our method compared to prior works in a simple 2D bandit example with a multimodal behavior policy. We then show that our method can achieve state-of-the-art performance on the majority of the D4RL benchmark tasks.
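To make the objective described above concrete, the following is a minimal PyTorch-style sketch, not the authors' reference implementation: a standard DDPM denoising loss on dataset actions serves as behavior cloning, while a Q-maximization term on actions sampled differentiably from the diffusion policy provides policy improvement. The networks eps_model and q_net, the noise-schedule tensors alphas and alpha_bars, and the trade-off weight eta are assumed placeholders.

    import torch
    import torch.nn.functional as F

    def sample_actions(eps_model, states, act_dim, alphas, alpha_bars):
        """Reverse diffusion: denoise Gaussian noise into an action conditioned
        on the state. Kept differentiable so Q-gradients reach eps_model."""
        a = torch.randn(states.shape[0], act_dim, device=states.device)
        for i in reversed(range(len(alphas))):
            t = torch.full((states.shape[0],), i, device=states.device, dtype=torch.long)
            eps = eps_model(a, states, t)
            a = (a - (1 - alphas[i]) / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
            if i > 0:  # inject noise at all but the final reverse step
                a = a + torch.sqrt(1 - alphas[i]) * torch.randn_like(a)
        return a

    def diffusion_ql_loss(eps_model, q_net, states, actions, alphas, alpha_bars, eta=1.0):
        B, act_dim = actions.shape
        # Behavior-cloning term: predict the noise injected into dataset actions.
        t = torch.randint(0, len(alphas), (B,), device=states.device)
        noise = torch.randn_like(actions)
        ab = alpha_bars[t].unsqueeze(-1)
        noisy = torch.sqrt(ab) * actions + torch.sqrt(1 - ab) * noise
        bc_loss = F.mse_loss(eps_model(noisy, states, t), noise)
        # Policy-improvement term: maximize the learned critic at policy samples.
        new_actions = sample_actions(eps_model, states, act_dim, alphas, alpha_bars)
        q_loss = -q_net(states, new_actions).mean()
        return bc_loss + eta * q_loss

Because the behavior-cloning and Q-maximization terms share the same set of diffusion-model parameters, the policy is improved and regularized toward the data distribution by a single loss, which is the coupling the abstract refers to.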

1. INTRODUCTION

Offline reinforcement learning (RL), also known as batch RL, aims at learning effective policies entirely from previously collected data without interacting with the environment (Lange et al., 2012; Fujimoto et al., 2019). Eliminating the need for online interaction with the environment makes offline RL attractive for a wide array of real-world applications, such as autonomous driving and patient treatment planning, where real-world exploration with an untrained policy is risky, expensive, or time-consuming. Instead of relying on real-world exploration, offline RL emphasizes the use of prior data, such as human demonstrations, that is often available at a much lower cost than online interactions. However, relying only on previously collected data makes offline RL a challenging task. Applying standard policy improvement approaches to an offline dataset typically requires evaluating actions that have not been seen in the dataset, whose values are therefore unlikely to be estimated accurately. For this reason, naive approaches to offline RL typically learn poor policies that prefer out-of-distribution actions whose values have been overestimated, resulting in unsatisfactory performance (Fujimoto et al., 2019). Previous work on offline RL generally addresses this problem in one of four ways: 1) regularizing how far the policy can deviate from the behavior policy (Fujimoto et al., 2019; Fujimoto & Gu, 2021; Kumar et al., 2019; Wu et al., 2019; Nair et al., 2020; Lyu et al., 2022); 2) constraining the learned value function to assign low values to out-of-distribution actions (Kostrikov et al., 2021a; Kumar et al., 2020); 3) introducing model-based methods, which learn a model of the environment dynamics and perform pessimistic planning in the learned Markov decision process (MDP) (Kidambi

