BEVDISTILL: CROSS-MODAL BEV DISTILLATION FOR MULTI-VIEW 3D OBJECT DETECTION

Abstract

3D object detection from multiple image views is a fundamental and challenging task for visual scene understanding. Owing to its low cost and high efficiency, multi-view 3D object detection has demonstrated promising application prospects. However, accurately detecting objects through perspective views is extremely difficult due to the lack of depth information. Current approaches tend to adopt heavy backbones for image encoders, making them inapplicable for real-world deployment. Different from images, LiDAR points are superior in providing spatial cues, resulting in highly precise localization. In this paper, we explore the incorporation of LiDAR-based detectors for multi-view 3D object detection. Instead of directly training a depth prediction network, we unify the image and LiDAR features in the Bird's-Eye-View (BEV) space and adaptively transfer knowledge across non-homogeneous representations in a teacher-student paradigm. To this end, we propose BEVDistill, a cross-modal BEV knowledge distillation (KD) framework for multi-view 3D object detection. Extensive experiments demonstrate that the proposed method outperforms current KD approaches on a highly competitive baseline, BEVFormer, without introducing any extra cost in the inference phase. Notably, our best model achieves 59.4 NDS on the nuScenes test leaderboard, setting a new state of the art in comparison with various image-based detectors.

1. INTRODUCTION

3D object detection, which aims at localizing objects in 3D space, is a crucial ingredient of 3D scene understanding. It has been widely adopted in various applications, such as autonomous driving (Chen et al., 2022a; Shi et al., 2020; Wang et al., 2021b), robotic navigation (Antonello et al., 2017), and virtual reality (Schuemie et al., 2001). Recently, multi-view 3D object detection has drawn great attention thanks to its low cost and high efficiency. As images offer discriminative appearance and rich texture with dense pixels, detectors can easily discover and categorize objects, even at a far distance. Despite this deployment advantage, accurately localizing instances from camera views alone is extremely difficult, mainly due to the ill-posed nature of monocular imagery. Therefore, recent approaches adopt heavy backbones (e.g., ResNet-101-DCN (He et al., 2016), VoVNetV2 (Lee & Park, 2020)) for image feature extraction, making them inapplicable to real-world applications.

LiDAR points, which capture precise 3D spatial information, provide natural guidance for camera-based object detection. In light of this, recent works (Guo et al., 2021b; Peng et al., 2022) have started to explore the incorporation of point clouds into 3D object detection for performance improvement. One line of work (Wang et al., 2019b) projects each point onto the image to form depth-map labels and subsequently trains a depth estimator to explicitly extract the spatial information. Such a paradigm generates intermediate products, i.e., depth prediction maps, thereby introducing extra computational cost. Another line of work (Chong et al., 2021) leverages the teacher-student paradigm for knowledge transfer: the LiDAR points are projected onto the image plane, constructing a 2D input for the teacher model. Since the student and teacher models are structurally identical, feature mimicking can be naturally conducted under this framework.
Although this solves the alignment issue across modalities, it misses the opportunity to pursue a strong LiDAR-based teacher, which is important in the knowledge distillation (KD) paradigm. Recently, UVTR (Li et al., 2022a) proposed to distill cross-modal knowledge in the voxel space while maintaining the structure of the respective detectors. However, it directly forces the 2D branch to imitate the 3D features, ignoring the divergence between the two modalities.

In this work, by carefully examining the non-homogeneous features represented in the 2D and 3D spaces, we explore knowledge distillation for multi-view 3D object detection. Two technical challenges arise. First, the views of images and LiDAR points differ: camera features lie in the perspective view, while LiDAR features lie in the 3D/bird's-eye view. Such a view discrepancy indicates that a naive one-to-one imitation (Romero et al., 2014) may not be suitable. Second, RGB images and point clouds hold distinct representations in their own modalities, so directly mimicking features, as is commonly done in the 2D detection paradigm (Yang et al., 2022a; Zhang & Ma, 2020), can be suboptimal.

We address these challenges by designing a cross-modal BEV knowledge distillation framework, namely BEVDistill. Instead of constructing a separate depth estimation network or explicitly projecting one view into the other, we convert all features into the BEV space, maintaining both geometric structure and semantic information. Through this shared BEV representation, features from different modalities are naturally aligned without much information loss.
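The key point above is that once LiDAR and camera features share the same BEV grid, per-cell supervision becomes well defined. The following minimal numpy sketch illustrates this with a hypothetical `points_to_bev` helper that collapses LiDAR points into a BEV grid (real detectors use learned voxel encoders, not point counts); the teacher and student tensors then share one spatial layout:

```python
import numpy as np

def points_to_bev(points, grid_size=32, pc_range=(-50.0, 50.0)):
    """Scatter LiDAR points (N, 3) into a BEV grid (illustrative only).

    Hypothetical helper: each cell stores the point count as a crude
    1-channel 'feature'; the z coordinate is collapsed (bird's-eye view).
    """
    lo, hi = pc_range
    bev = np.zeros((1, grid_size, grid_size), dtype=np.float32)
    # Map x/y coordinates to integer grid indices and drop out-of-range points.
    ij = ((points[:, :2] - lo) / (hi - lo) * grid_size).astype(int)
    ij = ij[(ij >= 0).all(1) & (ij < grid_size).all(1)]
    for x, y in ij:
        bev[0, y, x] += 1.0
    return bev

points = np.array([[0.0, 0.0, 1.2], [10.0, -5.0, 0.3], [49.0, 49.0, 2.0]])
teacher_bev = points_to_bev(points)           # (1, 32, 32) LiDAR BEV
student_bev = np.zeros_like(teacher_bev)      # camera BEV with the same layout
# Shared layout -> a per-cell imitation loss needs no view projection.
l2 = ((teacher_bev - student_bev) ** 2).mean()
```

Because both tensors live on the same grid, no perspective-to-BEV warping is needed at distillation time, which is precisely what the shared representation buys.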
After that, we adaptively transfer spatial knowledge through both dense and sparse supervision: (i) we introduce soft foreground-guided distillation for non-homogeneous dense feature imitation, and (ii) we propose a sparse instance-wise distillation paradigm that selectively supervises the student by maximizing mutual information. Experimental results on the competitive nuScenes dataset demonstrate the superiority and generalization of BEVDistill. For instance, we achieve 3.4 NDS and 2.7 NDS improvements over a competitive multi-view 3D detector, BEVFormer (Li et al., 2022c), under the single-frame and multi-frame settings, respectively. Besides, we present extensive experiments on lightweight backbones, as well as detailed ablation studies, to validate the effectiveness of our method. Notably, our best model reaches 59.4 NDS on the nuScenes test leaderboard, achieving new state-of-the-art results among all published multi-view 3D detectors.
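The dense branch of (i) can be sketched as a foreground-weighted feature-imitation loss. The snippet below is an illustrative simplification, not the paper's exact formulation: the soft mask `fg` stands in for whatever foreground probabilities the teacher provides (e.g., a center heatmap), and cells with higher foreground weight dominate the loss while background cells contribute little:

```python
import numpy as np

def foreground_guided_loss(student, teacher, fg_mask):
    """Soft foreground-guided dense distillation (illustrative sketch).

    student, teacher: (C, H, W) BEV feature maps; fg_mask: (H, W) soft
    weights in [0, 1]. The per-cell L2 error is averaged over channels,
    then reweighted by the foreground mask and normalized by its mass.
    """
    per_cell = ((student - teacher) ** 2).mean(axis=0)      # (H, W)
    return (fg_mask * per_cell).sum() / (fg_mask.sum() + 1e-6)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 16, 16)).astype(np.float32)
student = np.zeros_like(teacher)
fg = np.zeros((16, 16), dtype=np.float32)
fg[6:10, 6:10] = 1.0    # pretend one object occupies these BEV cells
weighted = foreground_guided_loss(student, teacher, fg)
uniform = foreground_guided_loss(student, teacher, np.ones((16, 16)))
```

With a uniform mask the loss degenerates to plain feature mimicking; the soft mask is what keeps the student from wasting capacity imitating empty BEV cells.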

2. RELATED WORK

2.1. VISION-BASED 3D OBJECT DETECTION

Vision-based 3D object detection aims to detect object locations, scales, and rotations, which is of great importance in autonomous driving (Xie et al., 2022; Jiang et al., 2022; Zhang et al., 2022) and augmented reality (Azuma, 1997). One line of work detects 3D boxes directly from single images. Mono3D (Chen et al., 2016) utilizes traditional methods to lift 2D objects into 3D space with semantic and geometrical information. In consideration that objects located at different distances appear at various scales, D4LCN (Ding et al., 2020) proposes to leverage depth prediction for convolutional kernel learning. Recently, FCOS3D (Wang et al., 2021a) extends the classical 2D paradigm FCOS (Tian et al., 2019b) to monocular 3D object detection, transforming the regression targets into the image domain by predicting their 2D-based attributes. Further, PGD (Wang et al., 2022b) introduces relational graphs to improve depth estimation for object localization. MonoFlex (Zhang et al., 2021) argues that objects located at different positions should not be treated equally and presents auto-adjusted supervision. Another line of work predicts objects from multi-view images. DETR3D (Wang et al., 2022c) first incorporates DETR (Carion et al., 2020) into 3D detection by introducing a novel concept: 3D reference points. After that, Graph-DETR3D (Chen et al., 2022b) extends it by enriching the feature representations with dynamic graph feature aggregation. Different from the above methods, BEVDet (Huang et al., 2021) leverages Lift-Splat-Shoot (Philion & Fidler, 2020) to explicitly project images into the BEV space, followed by a traditional 3D detection head. Inspired by the recently developed attention mechanism, BEVFormer (Li et al., 2022c) automates the cam2bev process with learnable attention.

Code availability: https://github.com/zehuichen123/BEVDistill.

