LEARNING OBJECT-LANGUAGE ALIGNMENTS FOR OPEN-VOCABULARY OBJECT DETECTION

Abstract

Existing object detection methods are bound to a fixed-set vocabulary defined by costly labeled data. When dealing with novel categories, the model has to be retrained with additional bounding box annotations. Natural language supervision is an attractive alternative for its annotation-free nature and broader coverage of object concepts. However, learning open-vocabulary object detection from language is challenging, since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings, which enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS.
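To illustrate the set matching idea, the following is a minimal sketch, not the paper's implementation: it matches word embeddings to region features by solving an optimal bipartite assignment over cosine-similarity scores with the Hungarian algorithm. The function name and the choice of cosine similarity as the alignment score are assumptions for illustration only.

```python
# Hypothetical sketch of object-language alignment as set matching.
# Assumes region features and word embeddings live in a shared embedding space.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions_to_words(region_feats, word_embs):
    """Assign each word to its best region via optimal bipartite matching.

    region_feats: (R, D) array of image region features
    word_embs:    (W, D) array of word embeddings, with W <= R
    Returns a list of (word_index, region_index) pairs.
    """
    # Cosine-similarity alignment scores between every word and region
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    scores = w @ r.T  # shape (W, R)
    # The Hungarian algorithm minimizes cost, so negate the similarities
    word_idx, region_idx = linear_sum_assignment(-scores)
    return list(zip(word_idx, region_idx))
```

In training, the assignment found this way could supervise a region-word alignment loss; at inference, novel categories are handled simply by embedding new words.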

1. INTRODUCTION

In the past few years, significant breakthroughs in visual models have been witnessed on closed-set recognition, where the object categories are pre-defined by the dataset (Johnson-Roberson et al., 2016; Sakaridis et al., 2018; Geiger et al., 2013; Yu et al., 2018). Such achievements are not the final answer to artificial intelligence, since human intelligence can perceive the world in an open environment. This motivates the community to move towards the open-world setting, where visual recognition models are expected to recognize arbitrary novel categories in a zero-shot manner. Towards this goal, learning visual models from language supervision has become increasingly popular due to its open attributes, rich semantics, diverse data sources and nearly-free annotation cost. In particular, many recent image classification works (Yu et al., 2022; Yuan et al., 2021; Zhai et al., 2021; Jia et al., 2021; Radford et al., 2021; Zhou et al., 2021a; Rao et al., 2021; Huynh et al., 2021) successfully expand their vocabulary sizes by learning from large sets of image-text pairs and demonstrate very impressive zero-shot ability. Following this success in open-vocabulary image classification, a natural extension is to tackle the object detection task, i.e., open-vocabulary object detection (OVOD). Object detection is a fundamental problem in computer vision that aims to localize object instances in an image and classify their categories (Girshick, 2015; Ren et al., 2015; Lin et al., 2017; He et al., 2017; Cai & Vasconcelos, 2018). Training an ordinary object detector relies on manually annotated bounding boxes and categories for objects (Fig. 1, left). Similarly, learning object detection from language supervision requires object-language annotations. Prior studies investigated using such annotations for visual grounding tasks (Krishna et al., 2017; Yu et al., 2016; Plummer et al., 2015; Zhuang et al., 2018).
Most existing open-vocabulary object detection works (Li et al., 2021; Kamath et al., 2021; Cai et al., 2022) depend entirely or partially on grounding annotations, which, however, are not scalable because such annotations are even more costly than object detection annotations. To reduce the annotation cost of open-vocabulary object detection, a handful of

Code is available at https://github.com/clin1223/VLDet.

