EFFICIENT PLANNING IN A COMPACT LATENT ACTION SPACE

Abstract

Planning-based reinforcement learning has shown strong performance in tasks with discrete and low-dimensional continuous action spaces. However, planning usually brings significant computational overhead for decision-making, and scaling such methods to high-dimensional action spaces remains challenging. To advance efficient planning for high-dimensional continuous control, we propose the Trajectory Autoencoding Planner (TAP), which learns low-dimensional latent action codes with a state-conditional VQ-VAE. The decoder of the VQ-VAE thus serves as a novel dynamics model that takes latent actions and the current state as input and reconstructs long-horizon trajectories. At inference time, given a starting state, TAP searches over discrete latent actions to find trajectories that have both high probability under the training distribution and high predicted cumulative reward. Empirical evaluation in the offline RL setting demonstrates low decision latency that is largely insensitive to growing raw action dimensionality. On Adroit robotic hand manipulation tasks with high-dimensional continuous action spaces, TAP surpasses existing model-based methods by a large margin and also beats strong model-free actor-critic baselines.

1. INTRODUCTION

Planning-based reinforcement learning (RL) methods have shown strong performance on board games (Silver et al., 2018; Schrittwieser et al., 2020), video games (Schrittwieser et al., 2020; Ye et al., 2021), and low-dimensional continuous control (Janner et al., 2021). Planning conventionally occurs in the raw action space of the Markov Decision Process (MDP), by rolling out future trajectories with a dynamics model of the environment, which is either predefined or learned. While such a planning procedure is intuitive, planning in raw action space can be inefficient and inflexible. First, the optimal plan in a high-dimensional raw action space can be difficult to find. Even if the optimizer is powerful enough to find the optimal plan, it is still difficult to ensure the learned model is accurate over the whole raw action space. In such cases, the planner can exploit the weaknesses of the model, leading to over-optimistic plans. Second, planning in raw action space ties the planning procedure to the temporal structure of the environment. Human planning is much more flexible: for example, humans can introduce temporal abstractions and plan with high-level actions; humans can plan backwards from the goal; a plan can also start as a high-level outline and be refined step by step. These limitations of raw-action-space planning also cause slow decision-making, which hampers adoption in real-time control.

In this paper, we propose the Trajectory Autoencoding Planner (TAP), which learns a latent action space and a latent-action model from offline data. A latent-action model takes a state s_1 and latent actions z as input and predicts a segment of future trajectories τ = (a_1, r_1, R_1, s_2, a_2, r_2, R_2, ...). This latent action space can be much smaller than the raw action space since it only captures plausible trajectories in the dataset, preventing out-of-distribution actions.
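The preference for in-distribution trajectories can be made concrete with a scoring rule that trades off predicted return against the trajectory's likelihood under the learned prior. The sketch below is a hypothetical objective in the spirit of TAP's description (the exact form, threshold, and penalty in the paper may differ; all names here are illustrative):

```python
def trajectory_score(pred_return, log_prob, log_prob_threshold=-10.0,
                     penalty_weight=1e4):
    """Score a candidate trajectory: maximize predicted return, but heavily
    penalize trajectories whose prior log-probability falls below a
    threshold, discouraging out-of-distribution plans."""
    shortfall = max(0.0, log_prob_threshold - log_prob)
    return pred_return - penalty_weight * shortfall

trajectory_score(5.0, -2.0)   # in-distribution: score equals the return, 5.0
trajectory_score(5.0, -12.0)  # low-probability trajectory: heavily penalized
```

Under such an objective, an optimizer that only proposes latent codes seen during training rarely triggers the penalty at all, which is the point of restricting the search to the learned latent space.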
Furthermore, the latent action decouples planning from the original temporal structure of the MDP. This enables the model, for example, to predict multiple steps of future trajectories with a single latent action. The model and latent action space are learned in an unsupervised manner. Given the current state, the encoder of TAP learns to map trajectories to a sequence of discrete latent codes (and the decoder maps them back), using a state-conditioned Vector Quantised Variational AutoEncoder (VQ-VAE). As illustrated in Figure 1(a), the distribution of these latent codes is then modelled autoregressively with a Transformer, again conditioned on the current state. During inference, instead of sampling actions and next states sequentially, TAP samples latent codes, reconstructs trajectories via the decoder, and executes the first action of the trajectory with the highest objective score. These latent codes of the VQ-VAE are thus latent actions, and the state-conditional decoder acts as a latent-action dynamics model. In practice, TAP uses a single discrete latent variable to model multiple (L = 3) steps of transitions, creating a compact latent action space for downstream planning. Planning in this compact action space reduces decision latency and makes high-dimensional planning easier. In addition, reconstructing entire trajectories after all latent codes are sampled also helps alleviate the compounding errors of step-by-step rollouts.

We evaluate TAP extensively in the offline RL setting. Our results on low-dimensional locomotion control tasks show that TAP performs competitively with strong model-based, model-free actor-critic, and sequence modelling baselines. On tasks with higher dimensionality, TAP not only surpasses model-based methods like MOPO (Yu et al., 2020) and Trajectory Transformer (TT) (Janner et al., 2021) but also significantly outperforms strong model-free ones (e.g., CQL (Kumar et al., 2020) and IQL (Kostrikov et al., 2022)).
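The inference procedure described above can be sketched as a simple sample-decode-select loop. The stand-in functions below are illustrative placeholders for TAP's learned prior and decoder (the real system uses a Transformer prior and beam search rather than i.i.d. sampling); only the control flow is meant to match the description:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N_CODES, N_SAMPLES, ACT_DIM = 8, 4, 16, 2  # illustrative sizes

def sample_prior(state):
    # Stand-in for the state-conditioned Transformer prior: propose
    # candidate sequences of discrete latent codes.
    return rng.integers(0, K, size=(N_SAMPLES, N_CODES))

def decode(state, codes):
    # Stand-in for the state-conditional decoder: reconstruct a full
    # trajectory per candidate; return its first action and a score
    # (here random, in TAP a function of predicted return and likelihood).
    first_actions = rng.normal(size=(len(codes), ACT_DIM))
    scores = rng.normal(size=len(codes))
    return first_actions, scores

def plan(state):
    codes = sample_prior(state)
    first_actions, scores = decode(state, codes)
    best = int(scores.argmax())    # keep the highest-scoring trajectory
    return first_actions[best]     # execute only its first action

action = plan(np.zeros(4))
```

Because whole trajectories are decoded at once after the latent codes are chosen, there is no step-by-step rollout inside the loop, which is where the compounding-error mitigation mentioned above comes from.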
In Figure 1(c), we show how the relative performance between TAP and baselines changes with the dimensionality of the action space. The advantage of TAP becomes pronounced as the dimensionality of raw actions grows, and the margin is large at high dimensionality, especially compared to the model-based method TT. This can be explained by the innate difficulty of policy optimization in a high-dimensional raw action space, which TAP avoids since its planning happens in a low-dimensional discrete latent space. At the same time, the sampling and planning of TAP are significantly faster than prior work that also uses a Transformer as a dynamics model: the decision time of TAP meets the requirement of deployment on a real robot (20 Hz) (Reed et al., 2022), while TT is much slower and its latency grows with the state-action dimensionality, as shown in Figure 1(b).
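A rough token-count argument makes the latency gap plausible. TT autoregressively samples one token per state, action, reward, and return-to-go dimension at every step, so its sequence length grows with the raw dimensionality D; TAP samples one latent code per L = 3 transitions regardless of D. The arithmetic below is illustrative (the exact discretization in each method may differ):

```python
def tt_tokens(horizon, state_dim, action_dim):
    # One token per state/action dimension plus reward and return-to-go.
    return horizon * (state_dim + action_dim + 2)

def tap_latents(horizon, steps_per_latent=3):
    # One discrete latent covers steps_per_latent transitions.
    return -(-horizon // steps_per_latent)  # ceiling division

# E.g. a 15-step plan with Adroit-like dimensions (state 39, action 30):
# tt_tokens(15, 39, 30) -> 1065 tokens, vs. tap_latents(15) -> 5 latents.
```

Since each token or latent requires a full Transformer forward pass during autoregressive sampling, a two-orders-of-magnitude gap in sequence length translates directly into the latency gap visible in Figure 1(b).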

2. BACKGROUND

2.1 VECTOR QUANTISED VARIATIONAL AUTOENCODER

The Vector Quantised Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017) is a generative model based on the Variational Autoencoder (VAE). There are three components in a VQ-VAE: 1) an encoder network mapping inputs to a collection of discrete latent codes; 2) a decoder that maps latent codes back to reconstructed inputs; 3) a learned prior distribution over the latent variables. With the prior and the decoder, we can draw samples from the VQ-VAE.
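The discreteness comes from the quantization step between encoder and decoder: each continuous encoder output is snapped to its nearest vector in a learned codebook. A minimal NumPy sketch of this lookup (shapes and sizes are illustrative; training additionally needs the straight-through gradient estimator and the codebook/commitment losses, omitted here):

```python
import numpy as np

def quantize(z_e, codebook):
    """Map continuous encoder outputs to their nearest codebook entries.

    z_e:      (T, d) continuous encoder outputs.
    codebook: (K, d) learned code vectors.
    Returns the discrete code indices and the quantized vectors z_q
    that are passed on to the decoder.
    """
    # Squared Euclidean distance from every output to every code: (T, K).
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)   # discrete latent codes, shape (T,)
    z_q = codebook[idx]          # quantized embeddings, shape (T, d)
    return idx, z_q

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K = 8 codes of dimension 4
z_e = rng.normal(size=(3, 4))        # 3 latent positions
idx, z_q = quantize(z_e, codebook)
```

In TAP these discrete indices play the role of latent actions, and the prior over them is the state-conditioned Transformer described in the introduction.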



Figure 1: (a) gives an overview of TAP modelling, where blocks represent the latent actions. (b) shows how decision time grows with the state-action dimensionality D. Tests are done on a single GPU. The number of planning steps for (b) is 15, and both models apply a beam search with a beam width of 64 and an expansion factor of 4. (c) shows the relative performance between TAP and baselines on tasks with increasing raw action dimensionality.

