DECISION TRANSFORMER UNDER RANDOM FRAME DROPPING

Abstract

Controlling agents remotely with deep reinforcement learning (DRL) in the real world remains an open challenge. One crucial stepping stone is to devise RL algorithms that are robust to dropped information caused by corrupted communication or malfunctioning sensors. Typical RL methods require considerable online interaction data that are costly and unsafe to collect in the real world. Furthermore, when applied to frame-dropping scenarios, they perform unsatisfactorily even at moderate drop rates. To address these issues, we propose Decision Transformer under Random Frame Dropping (DeFog), an offline RL algorithm that enables agents to act robustly in frame-dropping scenarios without online interaction. DeFog first randomly masks out data in the offline dataset and explicitly adds the time span of frame dropping as an input. A subsequent finetuning stage on the same offline dataset with a higher mask rate further boosts performance. Empirical results show that DeFog outperforms strong baselines under severe frame drop rates such as 90%, while maintaining similar returns without frame dropping on the standard MuJoCo control benchmarks and Atari environments. Our approach offers a robust and deployable solution for controlling agents in real-world environments with limited or unreliable data.

1. INTRODUCTION

Imagine you are piloting a drone on a mission to survey a remote forest. Suddenly, the images transmitted from the drone become heavily delayed or even disappear temporarily due to poor communication. An experienced pilot would use their skill to stabilize the drone based on the last received frame until communication is restored. In this paper, we aim to empower deep reinforcement learning (RL) algorithms with such abilities to control remote agents. In many real-world control tasks, the decision maker is separate from the action executor (Saha & Dasgupta, 2018), which introduces the risk of packet loss and delay during network communication. Furthermore, sensors such as cameras and IMUs are sometimes prone to temporary malfunction, or limited by hardware restrictions, causing the observation to be unavailable at certain timesteps (Dulac-Arnold et al., 2021). These examples lead to the core challenge of devising the desired algorithm: controlling agents against frame dropping, i.e., a temporary loss of observations as well as other information. Figure 1 illustrates how a regular RL algorithm performs under different frame drop rates. Our findings indicate that RL agents trained in environments without frame dropping struggle to adapt to scenarios with high frame drop rates, highlighting the severity of this issue and the need for a solution. This problem has recently attracted growing attention: Nath et al. (2021) adapt the vanilla DQN algorithm to a randomly delayed Markov decision process; Bouteiller et al. (2020) propose a method that modifies the classic Soft Actor-Critic algorithm (Haarnoja et al., 2018) to handle observation and action delays. In contrast to the frame-delay setting of previous works, we tackle a more challenging problem where frames are permanently lost. Moreover, previous methods usually learn in an online frame-dropping environment, which can be unsafe and costly.
In this paper, we introduce Decision Transformer under Random Frame Dropping (DeFog), an offline reinforcement learning algorithm that is robust to frame drops. The algorithm uses a Decision Transformer architecture (Chen et al., 2021) to learn from randomly masked offline datasets, and includes an additional input that represents the duration of frame dropping. In continuous control tasks, DeFog can be further improved by finetuning its parameters while the backbone of the Decision Transformer is held fixed. We evaluate our method on continuous and discrete control tasks in MuJoCo and Atari game environments, where observations are dropped randomly before being sent to the agent. Empirical results show that DeFog significantly outperforms various baselines under frame-dropping conditions, while maintaining performance comparable to other offline RL methods in regular non-frame-dropping environments.
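The masking and drop-span mechanism described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the function name `mask_trajectory` and its arguments are our own, and a real pipeline would operate on embedded states rather than raw values.

```python
import random

def mask_trajectory(states, mask_rate, seed=0):
    """Illustrative sketch of DeFog-style data augmentation: randomly
    drop states from an offline trajectory, substitute the last
    surviving state for each dropped one, and record the drop span,
    i.e., how many steps ago that surviving frame arrived, so the
    model can be conditioned on the staleness of its input."""
    rng = random.Random(seed)
    masked, spans = [states[0]], [0]      # the first frame is always kept
    for s in states[1:]:
        if rng.random() < mask_rate:      # frame masked out during training
            masked.append(masked[-1])     # condition on the stale frame...
            spans.append(spans[-1] + 1)   # ...and record how stale it is
        else:
            masked.append(s)              # frame delivered normally
            spans.append(0)
    return masked, spans
```

Under this sketch, the finetuning stage mentioned in the abstract would simply call the same routine with a higher `mask_rate` on the same dataset.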

2. RELATED WORK

2.1. CONTROL UNDER FRAME DROPPING AND DELAY

The loss or delay of observations and controls is an essential problem in remote control tasks (Balemi & Brunner, 1992; Funda & Paul, 1991). In recent years, with the rise of cloud-edge computing systems, this problem has gained even more attention in applications such as intelligent connected vehicles (Li et al., 2018) and UAV swarms (Bekkouche et al., 2018). When reinforcement learning is applied to such remote control tasks, a robust RL algorithm is desired. Katsikopoulos & Engelbrecht (2003) first propose the formulation of the random delayed Markov decision process, along with a method that augments the observation space with past actions. However, previous methods (Walsh et al., 2009; Schuitema et al., 2010) usually stack the delayed observations together, which leads to an expanded observation space and requires a fixed delay duration as a hard threshold. Hester & Stone (2013) propose predicting delayed states with a random forest model, while Bouteiller et al. (2020) tackle random observation and action delays in a model-free manner by relabelling past actions with the current policy to mitigate the off-policy problem. Nath et al. (2021) build upon the Deep Q-Network (DQN) and propose a state augmentation approach to learn an agent that can handle frame drops. However, these methods typically assume a maximum delay span and are trained in online settings. Recently, Imai et al. (2021) train a vision-guided quadrupedal robot to navigate in the wild under random observation delay by leveraging delay randomization. Our work shares the same intuition of train-time frame masking, but we utilize a Decision Transformer backbone with a novel frame drop interval embedding and a performance-improving finetuning technique.

2.2. TRANSFORMERS IN REINFORCEMENT LEARNING

Researchers have recently formulated the decision-making procedure in offline reinforcement learning as a sequence modeling problem using transformer models (Chen et al., 2021; Janner et al., 2021). In contrast to policy gradient and temporal difference methods, these works advocate the paradigm of treating reinforcement learning as a supervised learning problem (Schmidhuber, 2019), directly predicting actions from the observation sequence and the task specification. The Decision Transformer model (Chen et al., 2021) takes the encoded reward-to-go, state, and action sequence as



Figure 1: RL performance in the Hopper-v3 environment under different frame drop rates.
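The sequence-modeling view described in Section 2.2 can be sketched as follows. This is a minimal illustration under our own naming (the function `interleave_tokens` is not from the paper); real implementations embed each modality with learned projections before interleaving.

```python
def interleave_tokens(returns_to_go, states, actions):
    """Minimal sketch of the Decision Transformer input layout: for
    each timestep t, the return-to-go R_t, state s_t, and action a_t
    are interleaved into one flat sequence
    (R_1, s_1, a_1, R_2, s_2, a_2, ...) that a causal transformer
    consumes to predict the next action."""
    tokens = []
    for rtg, s, a in zip(returns_to_go, states, actions):
        tokens += [("rtg", rtg), ("state", s), ("action", a)]
    return tokens
```

For example, a two-step trajectory yields six tokens in the order return-to-go, state, action for each timestep, which is the ordering the causal attention mask relies on.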

