LEARNING OBJECT-LANGUAGE ALIGNMENTS FOR OPEN-VOCABULARY OBJECT DETECTION

Abstract

Existing object detection methods are bounded to a fixed-set vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with additional bounding box annotations. Natural language supervision is an attractive alternative for its annotation-free attributes and broader object concepts. However, learning open-vocabulary object detection from language is challenging, since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings. This enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS.

1. INTRODUCTION

In the past few years, visual models have achieved significant breakthroughs on closed-set recognition, where the object categories are pre-defined by the dataset (Johnson-Roberson et al., 2016; Sakaridis et al., 2018; Geiger et al., 2013; Yu et al., 2018). Such achievements are not the final answer to artificial intelligence, since human intelligence can perceive the world in an open environment. This motivates the community to move towards the open-world setting, where visual recognition models are expected to recognize arbitrary novel categories in a zero-shot manner. Towards this goal, learning visual models from language supervision has become increasingly popular due to its open attributes, rich semantics, various data sources and nearly-free annotation cost. In particular, many recent image classification works (Yu et al., 2022; Yuan et al., 2021; Zhai et al., 2021; Jia et al., 2021; Radford et al., 2021; Zhou et al., 2021a; Rao et al., 2021; Huynh et al., 2021) successfully expand their vocabulary sizes by learning from a large set of image-text pairs and demonstrate very impressive zero-shot ability. Following this success in open-vocabulary image classification, a natural extension is to tackle the object detection task, i.e., open-vocabulary object detection (OVOD). Object detection is a fundamental problem in computer vision, aiming to localize object instances in an image and classify their categories (Girshick, 2015; Ren et al., 2015; Lin et al., 2017; He et al., 2017; Cai & Vasconcelos, 2018). Training an ordinary object detector relies on manually-annotated bounding boxes and categories for objects (Fig. 1, Left). Similarly, learning object detection from language supervision requires object-language annotations. Prior studies investigated using such annotations for visual grounding tasks (Krishna et al., 2017; Yu et al., 2016; Plummer et al., 2015; Zhuang et al., 2018).
Most existing open-vocabulary object detection works (Li et al., 2021; Kamath et al., 2021; Cai et al., 2022) depend entirely or partially on grounding annotations, which however is not scalable because such annotations are even more costly than annotating object detection data. To reduce the annotation cost for open-vocabulary object detection, a handful of recent studies (Du et al., 2022; Gu et al., 2021) distill visual region features from classification-oriented models by cropping images. However, their performance is limited by the pre-trained models, which are trained on global image-text matching instead of region-word matching. In this paper, we propose a simple yet effective end-to-end vision-and-language framework for open-vocabulary object detection, termed VLDet, which directly trains an object detector from image-text pairs without relying on expensive grounding annotations or distilling classification-oriented vision models. Our key insight is that extracting region-word pairs from image-text pairs can be formulated as a set matching problem (Carion et al., 2020; Sun et al., 2021b;a; Fang et al., 2021), which can be effectively solved by finding a bipartite matching (Kuhn, 1955) between regions and words with minimal global matching cost. Specifically, we treat image region features as one set and word embeddings as another set, with the dot-product similarity as the region-word alignment score. To achieve the lowest cost, the optimal bipartite matching forces each image region to be aligned with its corresponding word under the global supervision of the image-text pair. By replacing the classification loss in object detection with the optimal region-word alignment loss, our approach matches each image region to its corresponding word and accomplishes the object detection task. We conduct extensive experiments in the open-vocabulary setting, in which an object detection dataset with localized object annotations is reserved for base categories, while a dataset of image-text pairs is used for novel categories. The results show that our method significantly improves the performance of detecting novel categories.
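As a concrete illustration of this set-matching view, the sketch below aligns a set of region features with a set of caption word embeddings via the Hungarian algorithm, using dot-product similarity as the alignment score. This is our own minimal sketch, not the authors' implementation; the array shapes and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_regions, num_words, dim = 5, 3, 8   # typically more regions than caption words

region_feats = rng.normal(size=(num_regions, dim))  # one feature vector per image region
word_embeds = rng.normal(size=(num_words, dim))     # one embedding per caption word

# Region-word alignment score: dot-product similarity.
scores = region_feats @ word_embeds.T               # shape (num_regions, num_words)

# Hungarian matching minimizes total cost, so negate the similarity
# to find the assignment with the highest global alignment score.
region_idx, word_idx = linear_sum_assignment(-scores)

# Each caption word is matched to exactly one region; surplus regions stay unmatched.
for r, w in zip(region_idx, word_idx):
    print(f"word {w} <- region {r}  (score {scores[r, w]:+.2f})")
```

In the paper's formulation, the matched region-word pairs found this way supply the alignment loss that replaces the classification loss of a standard detector.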
On the open-vocabulary COCO benchmark, our method outperforms the SOTA model PB-OVD (Zhou et al., 2022) by 1.2% mAP on novel classes using COCO Caption data (Chen et al., 2015). On the open-vocabulary LVIS benchmark, our method surpasses the SOTA method DetPro (Du et al., 2022) by 1.9% mask mAP on novel classes using the CC3M dataset (Sharma et al., 2018). The contributions of this paper are summarized as follows: 1) We introduce an open-vocabulary object detection method that learns object-language alignments directly from image-text pair data. 2) We formulate region-word alignment as a set-matching problem and solve it efficiently with the Hungarian algorithm (Kuhn, 1955). 3) Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance, particularly on detecting novel categories.

2. RELATED WORK

Figure 1: Left: conventional object detection. Right: our proposed open-vocabulary object detection, where we focus on using a corpus of image-text pairs to learn region-word alignments for object detection. By converting the image into a set of regions and the caption into a set of words, the region-word alignments can be solved as a set-matching problem. In this way, our method is able to directly train the object detector with image-text pairs covering a large variety of object categories.

Weakly-Supervised Object Detection (WSOD). WSOD aims to train an object detector using image-level labels as supervision, without any bounding box annotations. Multi-instance learning (Dietterich et al., 1997) is a well-studied strategy for solving this problem (Bilen et al., 2015; Bilen & Vedaldi, 2016; Cinbis et al., 2016). It learns each category label assignment based on a top-scoring strategy, i.e., assigning the top-scoring proposal to the corresponding image label. Since no bounding box supervision is available during training, OICR (Tang et al., 2017) proposes to refine the instance classifier online using spatial relations, thereby improving localization quality. Cap2Det (Ye et al., 2019) further utilizes a text classifier to convert captions into image labels. Based on the student-teacher framework, Omni-DETR (Wang et al., 2022) uses different types of weak labels to generate accurate pseudo labels through bipartite matching as a filtering mechanism. Different from WSOD, OVOD
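For concreteness, the top-scoring assignment used in MIL-based WSOD can be sketched as follows. This is an illustrative toy example, not code from any of the cited works; the scores and label indices are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
num_proposals, num_classes = 6, 4

# Per-proposal classification scores, e.g. from a weakly-supervised detector head.
proposal_scores = rng.random((num_proposals, num_classes))
image_labels = [1, 3]  # image-level labels only; no bounding boxes

# Top-scoring strategy: for each image-level label, the highest-scoring
# proposal is treated as the pseudo ground-truth instance for that class.
assignments = {c: int(proposal_scores[:, c].argmax()) for c in image_labels}
print(assignments)  # maps class index -> selected proposal index
```

Refinement methods such as OICR start from exactly this kind of pseudo ground truth and iteratively improve it using spatial relations between proposals.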

Code availability: https://github.com/clin1223/VLDet.

