PROPOSAL-CONTRASTIVE PRETRAINING FOR OBJECT DETECTION FROM FEWER DATA

Abstract

The use of pretrained deep neural networks is an attractive way to achieve strong results when little labeled data is available. When specialized to dense problems such as Object Detection, learning local rather than global information in images has proven more effective. However, for unsupervised pretraining, the popular contrastive learning approach requires a large batch size and, therefore, substantial resources. To address this problem, we turn to transformer-based object detectors, which have recently gained traction in the community thanks to their strong performance and their particularity of generating many diverse object proposals. In this work, we present Proposal Selection Contrast (ProSeCo), a novel unsupervised overall pretraining approach that leverages this property. ProSeCo uses the large number of object proposals generated by the detector for contrastive learning, which allows the use of a smaller batch size, combined with object-level features to learn local information in the images. To improve the effectiveness of the contrastive loss, we introduce object location information into the selection of positive examples, taking into account multiple overlapping object proposals. When reusing a pretrained backbone, we advocate for consistency in learning local information between the backbone and the detection head. We show that our method outperforms the state of the art in unsupervised pretraining for object detection on standard and novel benchmarks for learning with fewer data.

1. INTRODUCTION

In recent years, we have seen a surge in research on unsupervised pretraining. For popular tasks such as Image Classification or Object Detection, initializing with a pretrained backbone helps train large architectures more efficiently (Chen et al., 2020b; Caron et al., 2020; He et al., 2020). While gathering data is not difficult in most cases, labeling it is always time-consuming and costly. Pretraining leverages huge amounts of unlabeled data and helps achieve better performance with fewer data and fewer iterations when finetuning the pretrained models afterwards. The design of pretraining tasks for dense problems such as Object Detection has to take into account the fine-grained information in the image. Furthermore, complex object detectors contain different specific parts that can be pretrained either independently (Xiao et al., 2021; Xie et al., 2021; Wang et al., 2021a; Hénaff et al., 2021; Dai et al., 2021b; Bar et al., 2022) or jointly (Wei et al., 2021). The different pretraining possibilities for Object Detection in the literature are illustrated in Figure 1. Pretraining of the backbone tailored to dense tasks has been the subject of many recent efforts (Xiao et al., 2021; Xie et al., 2021; Wang et al., 2021a; Hénaff et al., 2021) (Backbone Pretraining), but few works have incorporated the detection-specific components of the architectures during pretraining (Dai et al., 2021b; Bar et al., 2022; Wei et al., 2021) (Overall Pretraining). Among them, SoCo (Wei et al., 2021) focuses on convolutional architectures and pretrains the whole detector, i.e. the backbone along with the detection heads (approach e. in Figure 1), whereas UP-DETR (Dai et al., 2021b) and DETReg (Bar et al., 2022) pretrain only the transformers (Vaswani et al., 2017) in transformer-based object detectors (Carion et al., 2020; Zhu et al., 2021) and keep the backbone fixed (approach c. in Figure 1).
Due to the numerous parameters that must be learned and the huge number of iterations needed because of random initialization, pretraining the entire detection model is expensive (Figure 1, e.). On the other hand, pretraining only the detection-specific parts with a fixed backbone involves fewer parameters and allows leveraging strong pretrained backbones already available. However, fully relying on aligning the embeddings given by the fixed backbone with those given by the detection head during pretraining, as done in DETReg or UP-DETR, introduces a discrepancy in the information contained in the features (Figure 1, c.). Indeed, while the pretrained backbone has been trained to learn image-level features, the object detector must understand object-level information in the image. Aligning inconsistent features hinders pretraining quality. In this work, we propose Proposal Selection Contrast (ProSeCo), an unsupervised pretraining method for transformer-based detectors with a fixed pretrained backbone. ProSeCo makes use of two models. The first alleviates the discrepancy in the features by maintaining a copy of the whole detection model. This model is referred to as the teacher and is in charge of the object proposal embeddings; it is updated as an Exponential Moving Average (EMA) of a student network with the same architecture, which makes the object predictions. The student is trained by a contrastive learning approach leveraging the high number of object proposals that can be obtained from the detectors. This methodology, in addition to the absence of batch normalization in the architectures, reduces the need for a large batch size. We further adapt the contrastive loss commonly used in pretraining to take into account the locations of the object proposals in the image, which is crucial in object detection.
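The two mechanisms described above, the EMA teacher update and the location-aware selection of positive examples, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the momentum value, IoU threshold, and all function names are assumptions made for illustration only.

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """In-place EMA: teacher <- momentum * teacher + (1 - momentum) * student."""
    for t, s in zip(teacher_params, student_params):
        t *= momentum
        t += (1.0 - momentum) * s

def box_iou(a, b):
    """IoU between two boxes given as [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def positive_mask(student_boxes, teacher_boxes, iou_threshold=0.5):
    """Mark teacher proposals that sufficiently overlap a student proposal
    as positives, so that several overlapping proposals can all count as
    positive pairs in the contrastive loss (threshold is an assumption)."""
    n, m = len(student_boxes), len(teacher_boxes)
    mask = np.zeros((n, m), dtype=bool)
    for i in range(n):
        for j in range(m):
            mask[i, j] = box_iou(student_boxes[i], teacher_boxes[j]) >= iou_threshold
    return mask
```

In this simplified view, each student proposal contrasts its embedding against all teacher embeddings, with `positive_mask` deciding which pairs are pulled together rather than relying on a single one-to-one assignment.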
In addition, the localization task is independently learned through region proposals generated by Selective Search (Uijlings et al., 2013). Our contributions are summarized as follows:

• We propose Proposal Selection Contrast (ProSeCo), a contrastive learning method tailored for pretraining transformer-based object detectors.

• We introduce the localization information of object proposals into the selection of positive examples in the contrastive loss to improve its efficiency for pretraining.

• We show that ProSeCo outperforms previous pretraining methods for transformer-based object detectors on standard as well as novel benchmarks.
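As a rough illustration of how unsupervised Selective Search boxes could supervise localization, the sketch below greedily matches each predicted box to its highest-overlapping region proposal and averages an L1 regression loss over the matches. Greedy matching stands in here for the bipartite matching used in DETR-style detectors, and every name in this snippet is an assumption, not the paper's code.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def localization_loss(pred_boxes, proposal_boxes):
    """Match each prediction to its highest-IoU Selective Search proposal
    and average the L1 distance over matched pairs (greedy stand-in for
    bipartite matching)."""
    pred_boxes = np.asarray(pred_boxes, dtype=float)
    proposal_boxes = np.asarray(proposal_boxes, dtype=float)
    losses = []
    for p in pred_boxes:
        j = max(range(len(proposal_boxes)), key=lambda k: iou(p, proposal_boxes[k]))
        losses.append(np.abs(p - proposal_boxes[j]).sum())
    return float(np.mean(losses))
```

The loss is zero when every prediction coincides with a region proposal, and grows with the coordinate-wise distance to the best-matching pseudo-box, which is what lets localization be learned without any human annotation.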

2. RELATED WORK

Supervised Object Detection with transformer-based architectures Object Detection is an important and extensively researched problem in computer vision (Girshick et al., 2014; Girshick, 



Figure 1: Illustration of the different pretraining possibilities for Object Detection. The pretraining can be either limited to the backbone (left), or overall, including the detection heads (right). The few previous overall approaches either suffer from a discrepancy in the features between the backbone, which is trained at the image level (global), and the detection heads, trained to encode object-level (local) information (c.), or from the cost of training many parameters with a large batch size (e.).

