H2RBOX: HORIZONTAL BOX ANNOTATION IS ALL YOU NEED FOR ORIENTED OBJECT DETECTION

Abstract

Oriented object detection emerges in many applications from aerial images to autonomous driving, while many existing detection benchmarks are annotated with horizontal bounding boxes only, which are also less costly than fine-grained rotated boxes, leading to a gap between the readily available training corpus and the rising demand for oriented object detection. This paper proposes a simple yet effective oriented object detection approach called H2RBox that uses only horizontal box annotations for weakly-supervised training, which closes the above gap and shows competitive performance even against methods trained with rotated boxes. The cores of our method are weakly- and self-supervised learning, which predicts the angle of the object by learning the consistency between two different views. To our best knowledge, H2RBox is the first horizontal-box-annotation-based oriented object detector. Compared to an alternative, i.e. horizontal-box-supervised instance segmentation with our post-adaptation to oriented object detection, our approach is not susceptible to the prediction quality of the mask and performs more robustly in complex scenes containing a large number of dense objects and outliers. Experimental results show that H2RBox has significant performance and speed advantages over horizontal-box-supervised instance segmentation methods, as well as lower memory requirements. Compared to rotated-box-supervised oriented object detectors, our method shows very close performance and speed. The source code is available in PyTorch-based MMRotate and Jittor-based JDet.

1. INTRODUCTION

In addition to the relatively matured area of horizontal object detection (Liu et al., 2020), oriented object detection has received extensive attention, especially for complex scenes whereby fine-grained bounding boxes (e.g. rotated/quadrilateral bounding boxes) are needed, e.g. aerial images (Ding et al., 2019; Yang et al., 2019a), scene text (Zhou et al., 2017), retail scenes (Pan et al., 2020), etc. Despite the increasing popularity of oriented object detection, many existing datasets are annotated with horizontal boxes (HBox), which may not be compatible (at least on the surface) with training an oriented detector. Hence labor-intensive re-annotation 1 has been performed on existing horizontal-annotated datasets. For example, DIOR-R (Cheng et al., 2022) and SKU110K-R (Pan et al., 2020) are rotated box (RBox) annotations of the aerial image dataset DIOR (192K instances) (Li et al., 2020) and the retail scene dataset SKU110K (1,733K instances) (Goldman et al., 2019), respectively.

One attractive question arises: can one achieve weakly-supervised learning for oriented object detection by only using (the more readily available) HBox annotations rather than RBox ones? One potential and verified technique in our experiments is HBox-supervised instance segmentation, concerning BoxInst (Tian et al., 2021), BoxLevelSet (Li et al., 2022b), etc. Based on the segmentation mask produced by these methods, one can readily obtain the final RBox by finding its minimum circumscribed rectangle, and we term the above procedure the HBox-Mask-RBox style, i.e. BoxInst-RBox and BoxLevelSet-RBox in this paper. Yet it in fact involves a potentially more challenging task, i.e. instance segmentation, whose quality can be sensitive to background noise and can heavily influence the subsequent RBox detection step, especially given complex scenes (Fig. 1(a)) or crowded objects (Fig. 1(b)). Also, involving segmentation is often more computationally costly and the whole procedure can be time-consuming (see Tab. 1-2).

Figure 1: Visual comparison of three HBox-supervised rotated detectors on aircraft detection (Wei et al., 2020), ship detection (Yang et al., 2018), vehicle detection (Azimi et al., 2021), etc. The HBox-Mask-RBox style methods, i.e. BoxInst-RBox (Tian et al., 2021) and BoxLevelSet-RBox (Li et al., 2022b), do not perform well in complex and object-cluttered scenes.

In this paper, we propose a simple yet effective approach, dubbed HBox-to-RBox (H2RBox), which achieves performance close to that of RBox-annotation-supervised methods, e.g. (Han et al., 2021b; Yang et al., 2023a), by only using HBox annotations, and even outperforms them in a considerable number of cases as shown in our experiments. The cores of our method are weakly- and self-supervised learning, which predicts the angle of the object by learning the enforced consistency between two different views. Specifically, we predict five offsets in the regression sub-network based on FCOS (Tian et al., 2019) in the WS branch (see Fig. 2 left) so that the final decoded outputs are RBoxes. Since we only have horizontal box annotations, we use the horizontal circumscribed rectangle of the predicted RBox when computing the regression loss. Ideally, the predicted RBoxes and the corresponding ground-truth (GT) RBoxes (unlabeled) have highly overlapping horizontal circumscribed rectangles. In the SS branch (see Fig. 2 right), we rotate the input image by a random angle and predict the corresponding RBox through a regression sub-network. Then, the consistency of RBoxes between the two branches, including scale consistency and spatial-location consistency, is learned to eliminate the undesired cases and ensure the reliability of the WS branch.

Our main contributions are as follows:
1) To our best knowledge, we propose the first HBox-annotation-based oriented object detector. Specifically, a weakly- and self-supervised angle learning paradigm is devised which closes the gap between HBox training and RBox testing, and it can serve as a plugin for existing detectors.
2) We prove through geometric equations that the predicted RBox is the correct GT RBox under our designed pipeline and consistency loss, without relying on not-fully-verified/ad-hoc assumptions, e.g. the color-pairwise affinity in BoxInst, or on additional intermediate results whose quality cannot be ensured, e.g. the feature maps used by many weakly-supervised methods (Wang et al., 2022).
3) Compared with the potential alternatives, e.g. HBox-Mask-RBox whose instance segmentation part is fulfilled by the state-of-the-art BoxInst, our H2RBox outperforms by about 14% mAP.
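The WS branch's regression target hinges on one geometric fact: the horizontal circumscribed rectangle of an RBox (cx, cy, w, h, θ) has the same center and side lengths w|cos θ| + h|sin θ| and w|sin θ| + h|cos θ|. A minimal sketch of this conversion (function name and the (cx, cy, w, h, θ) parameterization are our illustrative choices, not the paper's exact code):

```python
import numpy as np

def hbox_of_rbox(cx, cy, w, h, theta):
    """Horizontal circumscribed rectangle of a rotated box.

    (cx, cy): center; (w, h): side lengths; theta: rotation in radians.
    Returns (cx, cy, W, H) of the smallest axis-aligned box enclosing it.
    """
    c, s = abs(np.cos(theta)), abs(np.sin(theta))
    # Project the rotated sides onto the x- and y-axes and sum them.
    W = w * c + h * s
    H = w * s + h * c
    return cx, cy, W, H
```

During weakly-supervised training, a loss of this kind would compare `hbox_of_rbox(...)` of the predicted RBox against the annotated HBox; note the map is many-to-one, which is exactly why the SS branch is needed to disambiguate the angle.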
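The SS-branch consistency can be sketched as follows: if the input image is rotated by a random angle θ around its center, the RBox predicted on the rotated view should equal the WS-branch prediction with its center rotated by θ and its angle shifted by θ. The toy check below illustrates only this geometric relation (names are ours; the paper's actual loss uses separate scale- and spatial-location-consistency terms, not a single squared error):

```python
import numpy as np

def rotate_box(box, theta, center):
    """Rotate an RBox (cx, cy, w, h, angle) by theta around `center`."""
    cx, cy, w, h, a = box
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = cx - center[0], cy - center[1]
    return (center[0] + c * dx - s * dy,
            center[1] + s * dx + c * dy,
            w, h, a + theta)

def consistency_loss(ws_box, ss_box, theta, center):
    """Toy consistency: the WS-branch box, transformed by the same random
    rotation applied to the image, should coincide with the SS-branch box."""
    expected = rotate_box(ws_box, theta, center)
    return float(np.sum((np.array(expected) - np.array(ss_box)) ** 2))
```

A correctly angle-aware detector drives this quantity to zero for every θ, which rules out the degenerate solutions that satisfy the HBox loss alone.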
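For the HBox-Mask-RBox baselines, the mask-to-RBox step finds the minimum circumscribed rectangle of the mask pixels; implementations typically call `cv2.minAreaRect`. A dependency-free, brute-force approximation (our illustrative stand-in, scanning candidate angles rather than using rotating calipers) conveys the idea:

```python
import numpy as np

def min_area_rbox(points, num_angles=180):
    """Approximate minimum-area rotated box enclosing 2-D points
    (e.g. foreground pixels of a predicted mask).

    Scans candidate angles in [0, pi/2); for each, rotates the points into
    an axis-aligned frame and measures the bounding-box area there.
    """
    pts = np.asarray(points, dtype=float)
    best = None
    for theta in np.linspace(0.0, np.pi / 2, num_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        # Rotate points by -theta (row vectors times R(-theta)^T).
        rot = pts @ np.array([[c, -s], [s, c]])
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        w, h = hi - lo
        if best is None or w * h < best[0]:
            # Map the box center back to the original frame (rotate by +theta).
            cx, cy = ((lo + hi) / 2) @ np.array([[c, s], [-s, c]])
            best = (w * h, (cx, cy, w, h, theta))
    return best[1]
```

This final conversion is cheap; as noted above, the expensive and failure-prone part of the pipeline is the instance segmentation that produces the mask in the first place.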

Funding

* Correspondence author is Junchi Yan. The work was in part supported by National Key Research and Development Program of China (2020AAA0107600), National Natural Science Foundation of China (62222607), and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).

1 The annotation cost (in price) of RBox annotation is about 36.5% higher ($86 vs. $63) than that of HBox annotation, according to https://cloud.google.com/ai-platform/data-labeling/pricing.

