RLX2: TRAINING A SPARSE DEEP REINFORCEMENT LEARNING MODEL FROM SCRATCH

Abstract

Training deep reinforcement learning (DRL) models usually incurs high computation costs. Compressing DRL models therefore holds immense potential for accelerating training and easing model deployment. However, existing methods for generating small models mainly adopt knowledge-distillation-based approaches that iteratively train a dense network, so the training process still demands massive computing resources. Indeed, sparse training from scratch in DRL has not been well explored and is particularly challenging due to the non-stationarity of bootstrap training. In this work, we propose a novel sparse DRL training framework, "the Rigged Reinforcement Learning Lottery" (RLx2), which builds upon gradient-based topology evolution and is capable of training a DRL model based entirely on sparse networks. Specifically, RLx2 introduces a novel delayed multi-step TD target mechanism with a dynamic-capacity replay buffer to achieve robust value learning and efficient topology exploration in sparse models. RLx2 achieves state-of-the-art sparse training performance on several tasks, showing 7.5×-20× model compression with less than 3% performance degradation, and up to 20× and 50× FLOP reductions for training and inference, respectively.

1. INTRODUCTION

Deep reinforcement learning (DRL) has found successful applications in many important areas, e.g., games (Silver et al., 2017), robotics (Gu et al., 2017), and nuclear fusion (Degrave et al., 2022). However, training a DRL model demands heavy computational resources. For instance, AlphaGo-Zero for the game of Go (Silver et al., 2017), which defeated all Go AIs and human experts, required more than 40 days of training on four tensor processing units (TPUs). This heavy resource requirement makes training expensive and hinders the application of DRL on resource-limited devices. Sparse networks, initially proposed in deep supervised learning, have demonstrated great potential for model compression and training acceleration of deep reinforcement learning. Specifically, in deep supervised learning, state-of-the-art sparse training frameworks, e.g., SET (Mocanu et al., 2018) and RigL (Evci et al., 2020), can train a 90%-sparse network (i.e., the resulting network size is 10% of the original) from scratch without performance degradation. On the DRL side, existing works, including Rusu et al. (2016); Schmitt et al. (2018); Zhang et al. (2019), succeeded in generating DRL networks that are ultimately sparse. Yet, their approaches still require iteratively training dense networks, e.g., pre-trained dense teachers may be needed. As a result, the training cost for DRL remains prohibitively high, and existing methods cannot be directly implemented on resource-limited devices, leading to low flexibility in adapting the compressed DRL models to new environments, i.e., on-device models have to be retrained at large servers and re-deployed. Training a sparse DRL model from scratch, if done perfectly, has the potential to significantly reduce computation costs, enable efficient deployment on resource-limited devices, and achieve excellent flexibility in model adaptation.
However, training an ultra-sparse network (e.g., 90% sparsity) from scratch in DRL is challenging due to the non-stationarity of bootstrap training. Specifically, in DRL, the learning target is not fixed but evolves via bootstrapping (Tesauro et al., 1995), and the distribution of the training data can also be non-stationary (Desai et al., 2019). Moreover, using a sparse network structure means searching in a smaller hypothesis space, which further reduces confidence in the learning target. As a result, improper sparsification can cause irreversible damage to the learning path (Igl et al., 2021), resulting in poor performance. Indeed, recent works (Sokar et al., 2021; Graesser et al., 2022) show that a direct adoption of a dynamic sparse training (DST) framework in DRL still fails to achieve good model compression uniformly across environments. Therefore, the following interesting question remains open: Can an efficient DRL agent be trained from scratch with an ultra-sparse network throughout? In this paper, we give an affirmative answer to this question and propose a novel sparse training framework, "the Rigged Reinforcement Learning Lottery" (RLx2), for off-policy RL, which is the first algorithm to achieve sparse training throughout at sparsity levels above 90% with only minimal performance loss. RLx2 is inspired by the gradient-based topology evolution criterion of RigL (Evci et al., 2020) for supervised learning. However, a direct application of RigL does not achieve high sparsity, because sparse DRL models suffer from unreliable value estimation due to the limited hypothesis space, which in turn disturbs topology evolution. Thus, RLx2 is equipped with a delayed multi-step temporal difference (TD) target mechanism and a novel dynamic-capacity replay buffer to achieve robust value learning and efficient topology exploration.
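As a concrete reference for the value-learning side, a multi-step TD target bootstraps from a Q-estimate n steps ahead instead of one. The sketch below shows the standard n-step target computation; the precise delay schedule RLx2 uses for switching to multi-step targets is not reproduced in this section, so treat the function as a generic illustration rather than the exact mechanism.

```python
def n_step_td_target(rewards, bootstrap_q, gamma):
    """Standard n-step TD target:
        y_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * Q(s_{t+n}, a_{t+n})
    `rewards` holds the n observed rewards r_t, ..., r_{t+n-1};
    `bootstrap_q` is the critic's estimate at step t+n (e.g., from a
    target network)."""
    n = len(rewards)
    discounted_rewards = sum((gamma ** k) * r for k, r in enumerate(rewards))
    return discounted_rewards + (gamma ** n) * bootstrap_q
```

With n = 1 this reduces to the usual one-step TD target; a larger n shortens the bootstrap chain, which can make targets more reliable when a sparse critic's estimates are noisy, matching the motivation stated above.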
These two new components address the value estimation problem under sparse topology and, together with gradient-based topology evolution, achieve superior sparse-training performance. The main contributions of this paper are summarized as follows.

• We investigate the fundamental obstacles in training a sparse DRL agent from scratch, and discover two key factors for achieving good performance under sparse networks, namely robust value estimation and efficient topology exploration.

• Motivated by our findings, we propose RLx2, the first framework that enables DRL training based entirely on sparse networks. RLx2 possesses two key functions, i.e., a gradient-based search scheme for efficient topology exploration, and a delayed multi-step TD target mechanism with a dynamic-capacity replay buffer for robust value learning.

• Through extensive experiments, we demonstrate the state-of-the-art sparse training performance of RLx2 with two popular DRL algorithms, TD3 (Fujimoto et al., 2018) and SAC (Haarnoja et al., 2018), on several MuJoCo (Todorov et al., 2012) continuous control tasks. Our results show up to 20× model compression, together with 20× acceleration in training and 50× in inference in terms of floating-point operations (FLOPs).
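To make the dynamic-capacity buffer concrete, the sketch below shows the mechanics of a FIFO replay buffer whose capacity can be resized online, keeping the most recent transitions when shrinking. The class name and the decision of when (and by how much) to resize are illustrative assumptions; the adaptation rule RLx2 actually uses is not specified in this section.

```python
import random
from collections import deque


class DynamicCapacityReplayBuffer:
    """FIFO replay buffer whose capacity can be changed during training.
    Hypothetical sketch: only the resizing mechanics are shown, not any
    particular capacity-adaptation rule."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # When full, the oldest transition is evicted automatically.
        self.buffer.append(transition)

    def set_capacity(self, new_capacity):
        # Rebuilding the deque with a smaller maxlen keeps only the
        # most recent `new_capacity` transitions.
        self.buffer = deque(self.buffer, maxlen=new_capacity)

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Shrinking the buffer discards the oldest (most off-policy) data first, which is the natural choice when the behavior policy is drifting.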

2. RELATED WORKS

We discuss the related works on training sparse models in deep supervised learning and reinforcement learning below. We also provide a comprehensive performance comparison in Table 1.

Sparse Models in Supervised Learning. Many works (Bellec et al., 2017; Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) train a sparse neural network from scratch without pre-training dense models. These works adjust the structure of the sparse network during training, including Deep Rewiring (DeepR) (Bellec et al., 2017), Sparse Evolutionary Training (SET) (Mocanu et al., 2018), Dynamic Sparse Reparameterization (DSR) (Mostafa & Wang, 2019), Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019), and the Rigged Lottery (RigL) (Evci et al., 2020).
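These dynamic sparse training methods differ mainly in how dropped connections are regrown: SET regrows at random, while RigL regrows where the dense gradient magnitude is largest. The following single-layer sketch illustrates a RigL-style drop-and-grow step; it is a simplified illustration (e.g., it omits RigL's cosine-annealed update fraction and per-layer sparsity distribution).

```python
import numpy as np


def rigl_update(weights, grads, mask, update_frac):
    """One RigL-style topology update for a single layer: drop the active
    connections with the smallest weight magnitude, then grow the same
    number of inactive connections with the largest gradient magnitude,
    so overall sparsity stays fixed."""
    mask = mask.copy()
    n_active = int(mask.sum())
    k = int(update_frac * n_active)  # number of connections to rewire
    if k == 0:
        return mask

    # Drop: among active weights, remove the k with smallest |w|.
    active_idx = np.flatnonzero(mask)
    drop = active_idx[np.argsort(np.abs(weights.flat[active_idx]))[:k]]
    mask.flat[drop] = 0

    # Grow: among inactive positions, activate the k with largest |grad|.
    inactive_idx = np.flatnonzero(mask == 0)
    grow = inactive_idx[np.argsort(-np.abs(grads.flat[inactive_idx]))[:k]]
    mask.flat[grow] = 1
    return mask
```

Replacing the gradient-based grow criterion with a uniformly random choice over inactive positions turns this step into SET-style evolution.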

