CO3: COOPERATIVE UNSUPERVISED 3D REPRESENTATION LEARNING FOR AUTONOMOUS DRIVING

Abstract

Unsupervised contrastive learning for indoor-scene point clouds has achieved great success. However, unsupervised representation learning on outdoor-scene point clouds remains challenging because previous methods need to reconstruct the whole scene and capture partial views for the contrastive objective, which is infeasible in outdoor scenes with moving objects, obstacles, and sensors. In this paper, we propose CO3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations for outdoor-scene point clouds in an unsupervised manner. CO3 has several merits compared to existing methods. (1) It utilizes LiDAR point clouds from both the vehicle side and the infrastructure side to build views that differ sufficiently while still sharing common semantic information for contrastive learning, which are more appropriate than views built by previous methods. (2) Alongside the contrastive objective, we propose contextual shape prediction to bring more task-relevant information into unsupervised 3D point cloud representation learning, and we provide a theoretical analysis of this pre-training objective. (3) Compared to previous methods, the representation learned by CO3 can be transferred to different outdoor-scene datasets collected with different types of LiDAR sensors. (4) CO3 improves current state-of-the-art methods on the Once, KITTI, and NuScenes datasets by up to 2.58 mAP on the 3D object detection task and 3.54 mIoU on the LiDAR semantic segmentation task. Codes and models will be released here. We believe CO3 will facilitate the understanding of LiDAR point clouds in outdoor scenes.

1. INTRODUCTION

LiDAR is an important sensor for autonomous driving in outdoor environments, and both the machine learning and computer vision communities have shown strong interest in perception tasks on LiDAR point clouds, including 3D object detection, segmentation, and tracking. To date, random initialization followed by training from scratch on densely annotated data still dominates this field. In contrast, recent research efforts (He et al., 2020; Tian et al., 2019; Caron et al., 2020; Grill et al., 2020; Wang et al., 2021) in the image domain focus on unsupervised representation learning with a contrastive objective on views built from different augmentations of images. They pre-train a 2D backbone on a large-scale dataset such as ImageNet (Deng et al., 2009) in an unsupervised manner and use the pre-trained backbone to initialize downstream neural networks on different datasets and tasks, achieving significant performance improvements in 2D object detection and semantic segmentation (Girshick et al., 2014; Lin et al., 2017; Ren et al., 2015). Inspired by these

* Corresponding authors are Wenqi Shao and Ping Luo.

