MODEM: ACCELERATING VISUAL MODEL-BASED REINFORCEMENT LEARNING WITH DEMONSTRATIONS

Abstract

Poor sample efficiency continues to be the primary challenge for deployment of deep Reinforcement Learning (RL) algorithms in real-world applications, in particular for visuo-motor control. Model-based RL has the potential to be highly sample efficient by concurrently learning a world model and using synthetic rollouts for planning and policy improvement. In practice, however, sample-efficient learning with model-based RL is bottlenecked by the exploration challenge. In this work, we find that leveraging just a handful of demonstrations can dramatically improve the sample efficiency of model-based RL. Simply appending demonstrations to the interaction dataset, however, does not suffice. We identify key ingredients for leveraging demonstrations in model learning: policy pretraining, targeted exploration, and oversampling of demonstration data, which form the three phases of our model-based RL framework. We empirically study three complex visuo-motor control domains and find that our method is 160%-250% more successful in completing sparse reward tasks compared to prior approaches in the low data regime (100k interaction steps, 5 demonstrations). Code and videos are available at https://nicklashansen.github.io/modemrl.



Figure 1. Success rate (%) in sparse reward tasks. Given only 5 human demonstrations and a limited amount of online interaction, our method significantly improves success rate on 21 hard robotics tasks from pixels, including dexterous manipulation, pick-and-place, and locomotion, compared to strong baselines.

Introduction

Reinforcement Learning (RL) provides a principled and complete abstraction for training agents in unknown environments. However, the poor sample efficiency of existing algorithms prevents their application to real-world tasks like robotic object manipulation. This is further exacerbated in visuo-motor control tasks, which pose the challenges of both visual representation learning and motor control.

Model-based RL (MBRL) can in principle (Brafman & Tennenholtz, 2002) improve the sample efficiency of RL by concurrently learning a world model and policy (Ha & Schmidhuber, 2018; Ecoffet et al., 2019; Schrittwieser et al., 2020; Hafner et al., 2020; Hansen et al., 2022a). Imaginary rollouts from the learned model can reduce the need for real environment interactions and thus improve sample efficiency. In practice, however, a series of challenges, such as the difficulty of exploration, the need for shaped rewards, and the need for a high-quality visual representation, prevent MBRL from realizing its full potential. In this work, we seek to overcome these challenges from a practical standpoint, and we propose to do so by using expert demonstrations to accelerate MBRL.

Expert demonstrations for visuo-motor control tasks can be collected using human teleoperation, kinesthetic teaching, or scripted policies. While demonstrations provide direct supervision for learning complex behaviors, they can be costly to collect in large quantities (Baker et al., 2022). However, even a small number of demonstrations can significantly accelerate RL by circumventing challenges related to exploration.
Prior works have studied this in the context of model-free RL (MFRL) algorithms (Rajeswaran et al., 2018; Shah & Kumar, 2021; Zhan et al., 2020). In this work, we propose a new framework for accelerating model-based RL algorithms with demonstrations. On a suite of challenging visuo-motor control tasks, we find that our method trains policies that are approximately 160%-250% more successful than prior state-of-the-art (SOTA) baselines.

Off-policy RL algorithms (Sutton & Barto, 1998), both model-based and model-free, can in principle admit any dataset in the replay buffer. Consequently, it is tempting to naïvely append demonstrations to the replay buffer of an agent. However, we show that this is a poor choice (see Section 4): the agent still starts with a random policy and must slowly incorporate the behavioral priors inherent in the demonstrations while learning in the environment. Simply initializing the policy by behavior cloning (Pomerleau, 1988) on the demonstrations is also insufficient: any further learning of the policy is directly impacted by the quality of the world model and/or critic, and training these requires sufficiently exploratory datasets. To circumvent these challenges and enable stable, monotonic, yet sample-efficient learning, we propose Model-based Reinforcement Learning with Demonstrations (MoDem), a three-phase framework for visual model-based RL using only a handful of demonstrations. Our framework is summarized in Figure 2 and consists of:

• Phase 1: Policy pretraining, where the visual representation and policy are pretrained on the demonstration dataset via behavior cloning (BC). While this pretraining by itself does not produce successful policies, it provides a strong prior through initialization.

• Phase 2: Seeding, where the pretrained policy, with added exploration, is used to collect a small dataset from the environment, which is then used to pretrain the world model and critic. Empirically, data collected by the pretrained policy is far more useful for model and critic learning than the random policies used in prior work, and this phase is key to the success of our method: it ensures that the world model and critic benefit from the inductive biases provided by the demonstrations. Without it, interactive learning can quickly cause policy collapse after the first few iterations of training, erasing the benefits of policy pretraining.

• Phase 3: Finetuning with interactive learning, where we interleave policy learning using synthetic rollouts and world model learning using data from all three phases, including fresh environment interactions. Crucially, we aggressively oversample demonstration data during world model learning, and regularize with data augmentation in all phases.

Our Contributions. Our primary contribution in this work is the development of MoDem, which we evaluate on 18 challenging visual manipulation tasks with only sparse rewards from the Adroit (Rajeswaran et al., 2018) and Meta-World (Yu et al., 2019) suites, as well as locomotion tasks with dense rewards from DMControl (Tassa et al., 2018). Measured in terms of policy success after 100k interaction steps (and using just 5 demonstrations), MoDem achieves 160%-250% higher relative success and 38%-50% higher absolute success than strong baselines. Through extensive empirical evaluations, we also elucidate the importance of each phase of MoDem, as well as the role of data augmentations and pre-trained visual representations.
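The demonstration oversampling in Phase 3 can be sketched as follows. This is a minimal illustration only: the function name, buffer representation, and the 0.5 default ratio are our assumptions for exposition, not MoDem's exact sampling schedule. The key idea is that demonstrations are sampled with replacement, so a handful of demo transitions can fill a fixed fraction of every training batch regardless of how much interaction data has accumulated.

```python
import random

def sample_batch(demo_buffer, online_buffer, batch_size, demo_fraction=0.5):
    """Draw a world-model training batch that oversamples demonstrations.

    `demo_fraction` is the share of the batch taken from the (tiny) demo
    buffer; sampling is with replacement, so even 5 demonstrations can
    supply half of every batch. Names and the 0.5 default are illustrative
    assumptions, not the paper's exact hyperparameters.
    """
    n_demo = int(batch_size * demo_fraction)
    batch = random.choices(demo_buffer, k=n_demo)                  # oversampled demos
    batch += random.choices(online_buffer, k=batch_size - n_demo)  # online interactions
    random.shuffle(batch)  # avoid ordering bias within the batch
    return batch
```

With 100k online transitions and only a few hundred demonstration transitions, a 50% batch share oversamples the demonstrations by several orders of magnitude relative to uniform replay, which is what keeps the world model anchored to expert behavior during finetuning.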



Figure 2. Our framework (MoDem) consists of three phases: (1) a policy pretraining phase, where the representation and policy are trained on a handful of demonstrations via BC; (2) a seeding phase, where the pretrained policy is used to generate rollouts for targeted model learning; and (3) an interactive learning phase, where the model iteratively collects new rollouts and is trained with data from all three phases. Crucially, we aggressively oversample demonstration data for model learning, regularize the model using data augmentation, and reuse weights across phases. sg: stop-gradient operator.

