DECISION TRANSFORMER UNDER RANDOM FRAME DROPPING

Abstract

Remotely controlling agents with deep reinforcement learning (DRL) in the real world remains an open challenge. One crucial stepping stone is to devise RL algorithms that are robust to dropped information caused by corrupted communication or malfunctioning sensors. Typical RL methods require considerable online interaction data that are costly and unsafe to collect in the real world. Furthermore, when applied to frame-dropping scenarios, they perform unsatisfactorily even under moderate drop rates. To address these issues, we propose Decision Transformer under Random Frame Dropping (DeFog), an offline RL algorithm that enables agents to act robustly in frame-dropping scenarios without online interaction. DeFog first randomly masks out data in the offline dataset and explicitly feeds the time span of frame dropping to the model as an additional input. A subsequent finetuning stage on the same offline dataset with a higher mask rate further boosts performance. Empirical results show that DeFog outperforms strong baselines under severe frame drop rates such as 90%, while maintaining similar returns without frame dropping on the standard MuJoCo control benchmarks and Atari environments. Our approach offers a robust and deployable solution for controlling agents in real-world environments with limited or unreliable data.
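The abstract's training recipe — randomly mask observations in an offline trajectory and expose the elapsed time since the last surviving frame as an input — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the function name `apply_frame_dropping` and the policy of repeating the last received observation for dropped steps are our assumptions.

```python
import numpy as np

def apply_frame_dropping(observations, drop_rate, rng):
    """Randomly mask a trajectory of observations (illustrative sketch).

    At each dropped timestep, the agent only sees the most recent
    surviving observation, together with the number of steps since it
    was received (the "drop span" fed to the policy as an extra input).
    The first frame is always kept so the agent has an anchor.
    """
    T = len(observations)
    dropped = rng.random(T) < drop_rate
    dropped[0] = False  # never drop the initial observation

    masked = np.empty_like(observations)
    drop_spans = np.zeros(T, dtype=np.int64)
    last_kept = 0
    for t in range(T):
        if dropped[t]:
            masked[t] = observations[last_kept]  # repeat last received frame
            drop_spans[t] = t - last_kept        # steps since that frame
        else:
            masked[t] = observations[t]
            last_kept = t
    return masked, drop_spans

rng = np.random.default_rng(0)
obs = np.arange(10.0).reshape(10, 1)  # toy 1-D observation trajectory
masked, spans = apply_frame_dropping(obs, drop_rate=0.5, rng=rng)
```

Finetuning with a higher mask rate, as described above, would simply rerun this masking over the same dataset with a larger `drop_rate`.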

1. INTRODUCTION

Imagine you are piloting a drone on a mission to survey a remote forest. Suddenly, the images transmitted from the drone become heavily delayed or even disappear temporarily due to poor communication. An experienced pilot would use their skill to stabilize the drone based on the last received frame until communication is restored. In this paper, we aim to empower deep reinforcement learning (RL) algorithms with such abilities to control remote agents. In many real-world control tasks, the decision maker is separate from the action executor (Saha & Dasgupta, 2018), which introduces the risk of packet loss and delay during network communication. Furthermore, sensors such as cameras and IMUs are sometimes prone to temporary malfunction, or are limited by hardware restrictions, causing observations to be unavailable at certain timesteps (Dulac-Arnold et al., 2021). These examples lead to the core challenge of devising the desired algorithm: controlling agents against frame dropping, i.e., a temporary loss of observations and other information.



Figure 1: RL performance in the Hopper-v3 environment under different frame drop rates.

