POLICY PRE-TRAINING FOR AUTONOMOUS DRIVING VIA SELF-SUPERVISED GEOMETRIC MODELING

Abstract

Witnessing the impressive achievements of pre-training techniques on large-scale data in computer vision and natural language processing, we ask whether this idea can be adapted, in a grab-and-go spirit, to mitigate the sample inefficiency of visuomotor driving. Given the highly dynamic and variable nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive amounts of information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for autonomous driving. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. PPGeo proceeds in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, taking two consecutive frames as input. In the second stage, the visual encoder learns driving policy representations by predicting the future ego-motion from the current visual observation only, optimized with the photometric error. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. As a by-product, the pre-trained geometric modeling networks bring further improvement to depth and odometry estimation. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our approach, with improvements ranging from 2% to over 100% given very limited data.

1. INTRODUCTION

Policy learning refers to the process by which an autonomous agent acquires a decision-making policy for a certain task in a particular environment. Visuomotor policy learning (Mnih et al., 2015; Levine et al., 2016; Hessel et al., 2018; Laskin et al., 2020; Toromanoff et al., 2020) takes raw sensor observations as input and predicts actions, training the perception and control modules jointly in an end-to-end fashion. For visuomotor policy models, learning tabula rasa is difficult: it usually requires a prohibitively large corpus of labeled data or environment interactions to achieve satisfactory performance (Espeholt et al., 2018; Wijmans et al., 2019; Yarats et al., 2020). To mitigate this sample efficiency caveat, pre-training the visual perception network in advance is a promising solution. Recent studies (Shah & Kumar, 2021; Parisi et al., 2022; Xie et al., 2022; Xue et al., 2022) show that visual pre-training, including language-vision pre-training (Radford et al., 2021), provides superior representations for robotic policy learning tasks, e.g., dexterous manipulation, motor control skills, and visual navigation. However, for one crucial and challenging visuomotor task in particular, namely end-to-end autonomous driving¹, the aforementioned predominant pre-training methods may not be the optimal choice (Yamada et al., 2022; Zhang et al., 2022b).

In this paper, we aim to investigate why ever-victorious pre-training approaches for general computer vision tasks and robotic control tasks are prone to fail in the case of end-to-end autonomous driving. Conventional pre-training methods for general vision tasks, e.g., classification, segmentation, and detection, usually adopt a wide range of data augmentations to achieve translation and view invariance (Zhang et al., 2016; Wu et al., 2018).
For robotic control tasks, the input sequence is generally of small resolution, and the environment setting is simple and concentrated on objects (Parisi et al., 2022; Radosavovic et al., 2022). We argue that the visuomotor driving investigated in this paper is sensitive to geometric relationships and usually comprises complex scenarios. As depicted in Fig. 1(a), the input data often carry irrelevant information, such as background buildings, far-away moving vehicles, and nearby static obstacles, which amount to noise for the decision-making task. To obtain a good driving policy, we argue that the desirable model should concentrate only on particular parts or patterns of the visual input, i.e., those with a direct or deterministic relation to decision making, e.g., the traffic signal in Fig. 1(b). However, current pre-training approaches fail to fulfill this requirement.

This calls for a pre-training scheme curated for end-to-end autonomous driving. We attempt to pre-train a visual encoder on a massive amount of driving data crawled freely from the web, such that, given limited labeled data, downstream applications can generalize well and quickly adapt to various driving environments, as depicted in Fig. 1(c). The pivotal question is how to introduce driving-decision awareness into the pre-training process, so that the visual encoder concentrates on the visual cues crucial for the driving policy. One may resort to directly predicting ego-motion from single-frame sensor input, constraining the network to learn policy-related features. Previous literature tackles the supervision problem with pseudo-labeling, trained on either an open dataset (Zhang et al., 2022b) or the target-domain data (Zhang et al., 2022a).
However, pseudo-labeling approaches suffer from noisy predictions by poorly calibrated models, especially when there is a distinct domain gap in, e.g., geographical location and traffic complexity (Rizve et al., 2020). To address the aforementioned bottleneck, we propose PPGeo (Policy Pre-training via Geometric modeling), a fully self-supervised driving policy pre-training framework that learns from unlabeled and uncalibrated driving videos. It models the 3D geometric scene by jointly predicting ego-motion, depth, and camera intrinsics. Since directly learning ego-motion from single-frame input while simultaneously training depth and intrinsics from scratch is too difficult, it is necessary to separate the visual encoder pre-training from depth and intrinsics learning into two stages. In the first stage, the ego-motion, depth, and camera intrinsics are predicted jointly from two consecutive frames.
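The photometric supervision underlying this geometric modeling can be illustrated with a minimal sketch. Given predicted depth, camera intrinsics, and a relative pose, pixels of the target frame are back-projected to 3D, transformed into the source view, and re-projected to synthesize the target image; the photometric error between the real and synthesized images serves as the self-supervised training signal. All names below are our own, and nearest-neighbour sampling stands in for the differentiable bilinear warping used in practice:

```python
import numpy as np

def photometric_error(target, source, depth, K, R, t):
    """Mean L1 photometric error after warping `source` into the target view.

    target, source: (H, W) grayscale images; depth: (H, W) predicted depth of
    the target frame; K: (3, 3) intrinsics; R, t: relative pose target->source.
    """
    H, W = depth.shape
    # Pixel grid of the target frame in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
    # Back-project to 3D camera points, then transform into the source frame.
    cam = np.linalg.inv(K) @ pix * depth.ravel()
    cam_src = R @ cam + t[:, None]
    # Project onto the source image plane.
    proj = K @ cam_src
    z = proj[2]
    us, vs = proj[0] / z, proj[1] / z
    # Nearest-neighbour sampling (bilinear in a real implementation).
    ui = np.round(us).astype(int)
    vi = np.round(vs).astype(int)
    valid = (z > 0) & (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    warped = np.zeros(H * W)
    warped[valid] = source.ravel()[vi[valid] * W + ui[valid]]
    return np.abs(target.ravel()[valid] - warped[valid]).mean()
```

With an identity pose the warp maps every pixel onto itself, so the error of a frame against itself is zero; any error in the predicted depth, pose, or intrinsics misaligns the reconstruction and increases the loss, which is what drives all three predictions jointly during training.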



¹ We use end-to-end autonomous driving and visuomotor autonomous driving interchangeably in this paper.



Figure 1: Uniqueness of visuomotor driving policy learning. The planned trajectory is shown as red points. (a) Static obstacles and background buildings (objects in yellow rectangles) are irrelevant to the driving decision; (b) the traffic signal in the visual input (marked with the green box) is extremely difficult to recognize and yet deterministic for control outputs; (c) the pre-trained visual encoder has to be robust to different lighting and weather conditions. Photo credit: Caesar et al. (2020).


* Hongyang Li is the corresponding author. This work was in part supported by NSFC (62206172, 62222607), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and Shanghai Committee of Science and Technology (21DZ1100100).

