CO3: COOPERATIVE UNSUPERVISED 3D REPRESENTATION LEARNING FOR AUTONOMOUS DRIVING

Abstract

Unsupervised contrastive learning on indoor-scene point clouds has achieved great success. However, unsupervised representation learning on outdoor-scene point clouds remains challenging because previous methods need to reconstruct the whole scene and capture partial views for the contrastive objective, which is infeasible in outdoor scenes with moving objects, obstacles, and sensors. In this paper, we propose CO3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations for outdoor-scene point clouds in an unsupervised manner. CO3 has several merits compared to existing methods. (1) It utilizes LiDAR point clouds from the vehicle side and the infrastructure side to build views that differ sufficiently while maintaining common semantic information for contrastive learning, which is more appropriate than views built by previous methods. (2) Alongside the contrastive objective, we propose contextual shape prediction to bring more task-relevant information into unsupervised 3D point cloud representation learning, and we provide a theoretical analysis for this pre-training goal. (3) Compared to previous methods, the representation learned by CO3 can be transferred to different outdoor-scene datasets collected by different types of LiDAR sensors. (4) CO3 improves current state-of-the-art methods on the Once, KITTI and NuScenes datasets by up to 2.58 mAP in the 3D object detection task and 3.54 mIoU in the LiDAR semantic segmentation task. Codes and models will be released here. We believe CO3 will facilitate understanding LiDAR point clouds in outdoor scenes.

1. INTRODUCTION

LiDAR is an important sensor for autonomous driving in outdoor environments, and both the machine learning and computer vision communities have shown strong interest in perception tasks on LiDAR point clouds, including 3D object detection, segmentation and tracking. Up to now, randomly initializing the network and directly training from scratch on densely annotated data still dominates this field. On the contrary, recent research efforts (He et al., 2020; Tian et al., 2019; Caron et al., 2020; Grill et al., 2020; Wang et al., 2021) in the image domain focus on unsupervised representation learning with a contrastive objective on different views built from different augmentations of images. They pre-train a 2D backbone on a large-scale dataset like ImageNet (Deng et al., 2009) in an unsupervised manner and use the pre-trained backbone to initialize downstream neural networks on different datasets and tasks, which achieves significant performance improvements in 2D object detection and semantic segmentation (Girshick et al., 2014; Lin et al., 2017; Ren et al., 2015). Inspired by these successes, previous works (Xie et al., 2020; Hou et al., 2021; Liu et al., 2020) build contrastive views for indoor-scene point clouds by reconstructing the whole scene and capturing partial views. However, outdoor scenes are dynamic and large-scale, making it impossible to reconstruct the whole scene for building views. Thus, these methods cannot be directly transferred, but there exist two possible alternatives to build views. The first idea, embraced by (Liang et al., 2021; Yin et al., 2022), is to apply data augmentation to a single frame of point cloud and treat the original and augmented versions as different views, indicated by the first and second pictures in Fig. 1(b). However, all the common point cloud augmentations, including random drop, rotation and scaling, can be implemented as linear transformations, so views constructed in this way do not differ enough. The second idea, represented by (Huang et al., 2021), is to consider point clouds at different timestamps as different views.
Yet moving objects make it hard to find correct correspondences for contrastive learning. See the first and third pictures in Fig. 1(b): while the autonomous vehicle is waiting at a crossing, other cars and pedestrians are moving around. The autonomous vehicle has no idea how they move and cannot find correct correspondences (common semantics). Due to these limitations, it remains challenging to transfer pre-trained 3D encoders to datasets collected by different LiDAR sensors. Could we find better views to learn general representations for outdoor-scene LiDAR point clouds? In this paper, we propose COoperative COntrastive Learning and COntextual Shape Prediction, namely CO3, to explore the potential of a vehicle-infrastructure cooperation dataset for building adequate views in unsupervised 3D representation learning. As shown in Fig. 1(c), a recently released vehicle-infrastructure cooperation dataset called DAIR-V2X (Yu et al., 2022) is utilized to learn general 3D representations. Point clouds from the vehicle and infrastructure sides are captured at the same timestamp, so the two views share adequate common semantics; meanwhile, infrastructure-side and vehicle-side point clouds differ substantially. These properties make views constructed in this way appropriate for contrastive learning. Besides, as proposed in (Wang et al., 2022), representations learned by pure contrastive learning lack task-relevant information. Thus we further add a pre-training goal of contextual shape prediction to bring in more task-relevant information.
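To make the limitation of the first alternative concrete, the sketch below (a minimal NumPy illustration, not code from the paper; all names are hypothetical) shows that the common augmentations named above amount to a linear map on the point matrix (rotation, scaling) or a row selection (random drop), so the augmented view remains a simple transform of the original:

```python
import numpy as np

# A hypothetical point cloud of N points with (x, y, z) coordinates.
points = np.random.rand(1000, 3)

# Rotation about the z-axis and global scaling are linear maps on the points.
theta = np.deg2rad(15.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
s = 0.95  # global scaling factor
augmented = (points @ R.T) * s  # one combined linear transformation

# Random drop is a row selection, i.e. multiplication by a selection matrix.
keep = np.random.rand(len(points)) > 0.1
dropped = points[keep]
```

Because the map `p -> s * R p` is linear, the two "views" are related by a fixed, invertible transform; this is the precise sense in which they do not differ enough.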



Figure 1: Example views built by different methods for contrastive learning, including (a) previous indoor-scene methods, (b) previous outdoor-scene methods and (c) the proposed CO3. Compared to previous methods, CO3 builds two views that differ a lot while sharing adequate common semantics.
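For reference, the contrastive objective over such view pairs is typically a point-level InfoNCE loss: features of corresponding points from the two views form positive pairs, and all other points serve as negatives. The following is a generic sketch of that loss (a hypothetical `info_nce` helper, not CO3's exact formulation), assuming row i of each feature matrix belongs to the same corresponding point:

```python
import numpy as np

def info_nce(feats_a, feats_b, temperature=0.1):
    """Point-level InfoNCE: row i of feats_a corresponds to row i of feats_b."""
    # L2-normalize features so logits are scaled cosine similarities.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) similarity matrix

    # Softmax cross-entropy with the matching index as the positive.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(a))
    return -np.log(probs[idx, idx]).mean()
```

The loss is minimized when corresponding points are more similar to each other than to every other point, which is why the quality of the correspondences between the two views matters so much.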

