POLICY EXPANSION FOR BRIDGING OFFLINE-TO-ONLINE REINFORCEMENT LEARNING

Abstract

Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies by leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline. In this work, we introduce a policy expansion scheme for this task. After learning the offline policy, we use it as one candidate policy in a policy set. We then expand the policy set with another policy that is responsible for further learning. The two policies are composed in an adaptive manner for interacting with the environment. With this approach, the policy previously learned offline is fully retained during online learning, thus mitigating potential issues such as destroying the useful behaviors of the offline policy in the initial stage of online learning, while allowing the offline policy to participate in exploration naturally and adaptively. Moreover, new useful behaviors can potentially be captured by the newly added policy through learning. Experiments are conducted on a number of tasks and the results demonstrate the effectiveness of the proposed approach.

1. INTRODUCTION

Reinforcement learning (RL) has shown great potential in various fields, reaching or even surpassing human-level performance on many tasks (e.g. Mnih et al., 2015; Silver et al., 2017; Schrittwieser et al., 2019; Tsividis et al., 2021). However, since the policy is learned from scratch for a given task in the standard setting, the number of samples required by RL to successfully solve a task is usually large, which limits its applicability in many practical scenarios such as robotics, where physical interaction and data collection have a non-trivial cost. In many cases, a good amount of offline data is already available (Kober et al., 2013; Rastgoftar et al., 2018; Cabi et al., 2019), e.g., collected during previous iterations of experiments or from humans (e.g. demonstrations for the task of driving). Instead of ab initio learning as in the common RL setting, how to effectively leverage the already available offline data to help online policy learning is an interesting and open problem (Vecerík et al., 2017; Hester et al., 2018; Nair et al., 2018).

Offline RL is an active recent direction that aims to learn a policy purely from the offline data, without any further online interactions (Fujimoto et al., 2019; Kumar et al., 2020; Fujimoto & Gu, 2021; Levine et al., 2020; Ghosh et al., 2022; Chen et al., 2021; Janner et al., 2021; Yang et al., 2021; Lu et al., 2022; Zheng et al., 2022). It holds the promise of learning from suboptimal data and improving over the behavior policy that generated the dataset (Kumar et al., 2022), but its performance can still be limited because of its full reliance on the provided offline data. One possible way to benefit from further online learning is to pre-train with offline RL and use the resulting policy to warm-start an online RL algorithm, helping with both learning and exploration in the online stage. While this pre-training + fine-tuning paradigm is natural and intuitive, and has achieved great success in many fields like computer vision (Ge & Yu, 2017; Kornblith et al., 2019) and natural language processing (Devlin et al., 2018; Radford & Narasimhan, 2018; Brown et al., 2020), it is less widely used in RL. Many early attempts in the RL community report negative results along this direction. For example, it has been observed that initializing the policy with offline pre-training and then fine-tuning it with standard online RL algorithms (e.g. SAC (Haarnoja et al., 2018)) sometimes suffers from a non-recoverable performance drop under certain settings (Nair et al., 2020; Uchendu et al., 2022), potentially due to the distribution shift between the offline and online stages and the change in learning dynamics caused by the algorithmic switch. Another possible way is to use the same offline RL algorithm for online learning. However, it has been observed that standard offline RL methods are generally not effective in fine-tuning with online data, due to reasons such as the conservativeness of the method (Nair et al., 2020). Some recent works in offline RL have also started to focus on the offline pre-training + online fine-tuning paradigm (Nair et al., 2020; Kostrikov et al., 2022). For this purpose, they share the common philosophy of designing an RL algorithm that is suitable for both the offline and online phases. Because of the unified algorithm across phases, the network parameters (including those of both the critics and the actor) trained in the offline phase can be reused for further learning in the online phase.
Our work shares the same objective of designing effective offline-to-online training schemes. However, we take a different perspective by focusing on how to bridge offline and online learning, rather than on developing yet another offline or online RL method, which is orthogonal to the focus of this work. We illustrate the idea concretely by instantiating the proposed scheme on existing RL algorithms (Kostrikov et al., 2022; Haarnoja et al., 2018). The contributions of this work are:
• we highlight the value of properly connecting existing offline and online RL methods in order to enjoy the best of both worlds, a perspective that is alternative and orthogonal to developing completely new RL algorithms;
• we propose a simple scheme, termed policy expansion, for bridging offline and online reinforcement learning (an illustrative sketch is given after this list). The proposed approach not only preserves the behavior learned in the offline stage, but can also leverage it adaptively during online exploration and throughout learning;
• we verify the effectiveness of the proposed approach by conducting extensive experiments on various tasks and settings, with comparisons to a number of baseline methods.
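To convey the idea at a glance, the following is a minimal, illustrative sketch rather than the paper's exact algorithm (which is specified in later sections). It assumes that the frozen offline policy and a newly initialized policy each propose a candidate action, and that one of the two candidates is selected through a softmax over the critic's Q-values; the names offline_policy, new_policy, critic, and the temperature parameter are all illustrative assumptions.

```python
# A minimal sketch of the policy-expansion idea: the policy set contains the frozen
# offline policy and a newly added policy, and their candidate actions are selected
# adaptively using the critic's Q-values. Names and the exact selection rule are
# illustrative assumptions, not the paper's definitive algorithm.
import torch


def composed_action(state, offline_policy, new_policy, critic, temperature=1.0):
    """Adaptively choose between the frozen offline policy and the new policy."""
    candidates = torch.stack([
        offline_policy(state),   # behavior learned offline, fully retained (frozen)
        new_policy(state),       # newly added policy, responsible for further learning
    ])                           # shape: (2, action_dim)
    with torch.no_grad():
        q_values = torch.stack([critic(state, a) for a in candidates]).squeeze(-1)
    probs = torch.softmax(q_values / temperature, dim=0)   # adaptive weighting
    idx = torch.multinomial(probs, num_samples=1).item()
    return candidates[idx]


# Example with dummy components (shapes only, for illustration):
state = torch.randn(4)
offline_policy = lambda s: torch.tanh(torch.randn(2))
new_policy = lambda s: torch.tanh(torch.randn(2))
critic = lambda s, a: torch.randn(1)
action = composed_action(state, offline_policy, new_policy, critic)
```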

2. PRELIMINARIES

We briefly review some related basics in this section, first on model-free RL for online policy learning, and then on policy learning from offline datasets.

2.1. ONLINE REINFORCEMENT LEARNING

Standard model-free RL methods learn a policy that maps the current state s to a distribution over actions a, denoted π(s). The policy is typically modeled with a neural network π_θ(s), with θ denoting the learnable parameters. To train this policy, there are different approaches, including on-policy (Sutton et al., 2000; Schulman et al., 2017) and off-policy RL methods (Lillicrap et al., 2016; Haarnoja et al., 2018; Fujimoto et al., 2018; Zhang et al., 2022). In this work, we mainly focus on off-policy RL for online learning because of its higher sample efficiency. Standard off-policy RL methods rely on the state-action value function Q(s, a), learned with TD-learning: Q(s, a) = r(s, a) + γ E_{s'∼T(s,a), a'∼π_θ(s')}[Q(s', a')], where T(s, a) denotes the dynamics function, r(s, a) the reward, and γ ∈ (0, 1) a discount factor. By definition, Q(s, a) represents the accumulated discounted future reward obtained by starting from s, taking action a, and then following policy π_θ thereafter. In practice, Q(s, a) is implemented as a neural network Q_φ(s, a) with parameters φ. The policy parameters θ are optimized by maximizing max_θ E_{s∼D} E_{a∼π_θ}[Q(s, a)], where D denotes the replay buffer storing online trajectories. In the typical RL setting, learning is conducted from scratch by initializing all parameters randomly and interacting with the world with the randomly initialized policy.
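To make these objectives concrete, below is a minimal PyTorch-style sketch of the TD-learning update for Q_φ and the actor objective max_θ E_{s∼D, a∼π_θ}[Q(s, a)]. It is a sketch under illustrative assumptions (a tanh-squashed Gaussian actor, fixed network sizes, and a target critic); SAC's entropy regularization and the target-network update are omitted for brevity.

```python
# Minimal sketch of the off-policy actor-critic updates described above.
# Network sizes, the tanh-Gaussian actor, and variable names are illustrative.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


state_dim, action_dim, gamma = 17, 6, 0.99
actor = mlp(state_dim, 2 * action_dim)            # outputs mean and log-std of pi_theta(s)
critic = mlp(state_dim + action_dim, 1)           # Q_phi(s, a)
critic_target = mlp(state_dim + action_dim, 1)    # slowly updated copy used for TD targets
critic_target.load_state_dict(critic.state_dict())


def sample_action(states):
    # Reparameterized sample from a tanh-squashed Gaussian policy.
    mean, log_std = actor(states).chunk(2, dim=-1)
    return torch.tanh(mean + log_std.exp() * torch.randn_like(mean))


def critic_loss(s, a, r, s_next):
    # TD target: r(s, a) + gamma * Q(s', a') with a' ~ pi_theta(s'); r has shape (batch, 1).
    with torch.no_grad():
        a_next = sample_action(s_next)
        target = r + gamma * critic_target(torch.cat([s_next, a_next], dim=-1))
    return ((critic(torch.cat([s, a], dim=-1)) - target) ** 2).mean()


def actor_loss(s):
    # Maximize E_{s ~ D, a ~ pi_theta} Q(s, a), written as a loss to minimize.
    return -critic(torch.cat([s, sample_action(s)], dim=-1)).mean()
```

In practice, these two losses would be minimized with separate optimizers over minibatches sampled from the replay buffer D.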

2.2. POLICY LEARNING FROM OFFLINE DATASET

Policy learning from offline datasets has been investigated from different perspectives. Given expert-level demonstration data, behavior cloning (BC) (Pomerleau, 1988; Bain & Sammut, 1996) is an

