ADAPTIVE AUTOMOTIVE RADAR DATA ACQUISITION

Abstract

In an autonomous driving scenario, it is vital to acquire and efficiently process data from multiple sensors to obtain a complete and robust perspective of the surroundings. Many studies have shown the importance of having radar data in addition to images, since radar improves object detection performance. We develop a novel algorithm motivated by the hypothesis that, under a limited sampling budget, allocating more of the budget to areas containing objects, rather than sampling uniformly, ultimately improves relevant object detection and classification. To identify the areas with objects, we develop an algorithm that processes the detection results from the Faster R-CNN object detection network together with the previous radar frame, and uses these as prior information to adaptively allocate more bits to areas of the scene that may contain relevant objects. Using previous radar frame information mitigates the potential loss of an object missed by the image or by the object detection network. Moreover, in our algorithm, the error of missing relevant information in the current frame, caused by the limited-budget sampling of the previous radar frame, does not propagate across frames. We also develop an end-to-end transformer-based 2D object detection network using the NuScenes radar and image data. Finally, we compare the performance of our algorithm against that of standard CS and adaptive CS using radar on the Oxford Radar RobotCar dataset.

1. INTRODUCTION

The integration of deep learning and computer vision techniques into autonomous driving is aiding the development of robust and safe autonomous driving systems. Just as humans navigate the world using numerous senses and sources of information, autonomous driving systems must process different sensor information efficiently to obtain a complete perspective of the environment and maneuver safely. Numerous studies Meyer & Kuschk (2019), Chang et al. (2020) have shown the importance of having radar data in addition to images for improved object detection performance. Real-time radar data acquisition using compressed sensing is a well-studied field in which, even at sub-Nyquist sampling rates, the original data can be reconstructed accurately. During onboard signal acquisition and processing, compressed sensing reduces the required number of measurements, yielding speed and power savings. In adaptive block-based compressed sensing, given prior information and a limited sampling budget, radar blocks containing objects are allocated more sampling resources while the overall sampling budget is maintained. This further enhances the quality of the reconstructed data by focusing on the important regions. In our work, we split each radar frame into 8 azimuth blocks and used the 2D object detection results from images as prior information to choose the important regions. The 2D object detection network generates bounding boxes and object classes for objects in the image; the bounding boxes are used to identify the azimuth of each object in radar coordinates, which determines the important azimuth blocks. As a second step, we used both the previous radar frame and the 2D object detection network to determine the important regions and dynamically allocate the sampling budget. Using previous radar data in addition to object information from images mitigates the loss of object information by either the image or the object detection network.
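The block-selection and budget-allocation steps described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the camera field of view, image width, the two-tier 70/30 budget split, and all function names are hypothetical choices introduced for the example.

```python
# Hypothetical sketch: map image bounding boxes to radar azimuth blocks and
# allocate a fixed sampling budget, favoring blocks that may contain objects.
# Assumptions (not from the paper): a camera with horizontal field of view
# FOV_DEG centered on the radar boresight, image width IMG_WIDTH, and a
# simple 70/30 split of the budget between important and remaining blocks.

N_BLOCKS = 8          # azimuth blocks the radar frame is split into (from the text)
FOV_DEG = 90.0        # assumed horizontal camera field of view, degrees
IMG_WIDTH = 1600      # assumed image width, pixels

def bbox_to_azimuth(x_pixel):
    """Map a horizontal pixel coordinate to an azimuth angle in degrees,
    with 0 degrees at boresight (assumes a linear pixel-to-angle model)."""
    return (x_pixel / IMG_WIDTH - 0.5) * FOV_DEG

def important_blocks(bboxes, prev_radar_blocks=()):
    """Blocks flagged either by image detections (each bbox given as its
    horizontal extent (x_min, x_max)) or by the previous radar frame."""
    blocks = set(prev_radar_blocks)
    half, width = FOV_DEG / 2, FOV_DEG / N_BLOCKS
    for x_min, x_max in bboxes:
        i0 = int((bbox_to_azimuth(x_min) + half) // width)
        i1 = int((bbox_to_azimuth(x_max) + half) // width)
        i0 = min(N_BLOCKS - 1, max(0, i0))
        i1 = min(N_BLOCKS - 1, max(0, i1))
        blocks.update(range(i0, i1 + 1))   # mark every block the box spans
    return blocks

def allocate_budget(total, important):
    """Split a fixed total measurement budget across the blocks, giving a
    larger per-block share (here 70% of the total) to important blocks."""
    n_imp = len(important)
    if n_imp == 0 or n_imp == N_BLOCKS:
        return [total // N_BLOCKS] * N_BLOCKS   # fall back to uniform sampling
    imp_share = int(0.7 * total) // n_imp
    rest_share = (total - imp_share * n_imp) // (N_BLOCKS - n_imp)
    return [imp_share if b in important else rest_share
            for b in range(N_BLOCKS)]
```

For example, a single detection on the left side of the image marks one or two left-hand azimuth blocks as important, and `allocate_budget` then gives those blocks several times the per-block measurements of the remaining blocks while keeping the total unchanged; blocks carried over in `prev_radar_blocks` are kept important even if the detector misses the object in the current image.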

In addition, we have developed an end-to-end transformer-based 2D object detection Carion et al. (2020) network using the NuScenes Caesar et al. (2020) radar and image dataset. The model trained on both image and radar data achieved better object detection performance than the model trained on image data alone.

