LEVERAGING THE THIRD DIMENSION IN CONTRASTIVE LEARNING

Abstract

Self-Supervised Learning (SSL) methods operate on unlabeled data to learn robust representations useful for downstream tasks. Most SSL methods rely on augmentations obtained by transforming the 2D image pixel map. These augmentations ignore the fact that biological vision takes place in an immersive three-dimensional, temporally contiguous environment, and that low-level biological vision relies heavily on depth cues. Using a signal provided by a pretrained state-of-the-art monocular RGB-to-depth model (the Dense Prediction Transformer, Ranftl et al., 2021), we explore two distinct approaches to incorporating depth signals into the SSL framework. First, we evaluate contrastive learning using an RGB+depth input representation. Second, we use the depth signal to generate novel views from slightly different camera positions, thereby producing a 3D augmentation for contrastive learning. We evaluate these two approaches on three different SSL methods (BYOL, SimSiam, and SwAV) using ImageNette (a 10-class subset of ImageNet) and ImageNet-100. We find that both approaches to incorporating depth signals improve the robustness and generalization of the baseline SSL methods, though the first approach (depth-channel concatenation) is superior. For instance, adding the depth channel to BYOL increases downstream classification accuracy from 85.3% to 88.0% on ImageNette and from 84.1% to 87.0% on ImageNet-C.

1. INTRODUCTION

Biological vision systems evolved in and interact with a three-dimensional world. As an individual moves through the environment, the relative distance of objects is indicated by rich signals extracted by the visual system, from motion parallax to binocular disparity to occlusion cues. These signals play a role in early development to bootstrap an infant's ability to perceive objects in visual scenes (Spelke, 1990; Spelke & Kinzler, 2007) and to reason about physical interactions between objects (Baillargeon, 2004). In the mature visual system, features predictive of occlusion and three-dimensional structure are extracted early and in parallel in the visual processing stream (Enns & Rensink, 1990; 1991), and early vision uses monocular cues to rapidly complete partially-occluded objects (Rensink & Enns, 1998) and binocular cues to guide attention (Nakayama & Silverman, 1986). In short, biological vision systems are designed to leverage the three-dimensional structure of the environment.

In contrast, machine vision systems typically consider a 2D RGB image or a sequence of 2D RGB frames to be the relevant signal. Depth is treated as the end product of vision, not a signal that can be exploited to improve visual information processing. Given the bias in favor of end-to-end models, researchers might suppose that if depth were a useful signal, an end-to-end computer vision system would infer depth. Indeed, it is easy to imagine the advantages of depth processing integrated into the visual information processing stream. For example, if foreground objects were segmented from the background scene, neural networks would not make the errors they often do by classifying via shortcut features (e.g., misclassifying a cow at the beach as a whale) (Geirhos et al., 2020). In this work, we take seriously the insight from biological vision that depth signals are extracted early in the processing stream, and we explore how depth signals might support computer vision.
We assume the availability of a depth signal by using an existing state-of-the-art monocular RGB-to-depth extraction model, the Dense Prediction Transformer (DPT) (Ranftl et al., 2021). We focus on using the additional depth information for self-supervised representation learning. SSL aims to learn effective representations from unlabeled data that will be useful for downstream tasks (Chen et al., 2020a). We investigate two specific hypotheses. First, we consider directly appending the depth channel to the RGB channels and using the resulting RGB+D input in contrastive learning (Fig. 1). Second, we consider synthesizing novel image views from the RGB+D representation using a recent method, AdaMPI (Han et al., 2022), and treating these synthetic views as image augmentations for contrastive learning (Fig. 2). Prior work has explored the benefit of depth signals in supervised learning for specific tasks like object detection and semantic segmentation (Cao et al., 2016; Hoyer et al., 2021; Song et al., 2021; Seichter et al., 2021). Here, we pursue a similar approach in contrastive learning, where the goal is to learn robust, universal representations that support downstream tasks. To the best of our knowledge, only one previous paper has explored the use of depth for contrastive learning (Tian et al., 2020). In that work, ground-truth depth was used and treated as one of many distinct "views" of the world. We summarize our contributions below:

• Motivated by biological vision systems, we propose two distinct approaches to improving SSL using a (noisy) depth signal extracted from a monocular RGB image. First, we concatenate the derived depth map with the image and pass the four-channel RGB+D input to the SSL method. Second, we use a single-view view synthesis method that takes the depth map as input to generate novel 3D views, which serve as augmentations for contrastive learning.
• We show that both of these approaches improve the performance of three different contrastive learning methods (BYOL, SimSiam, and SwAV) on both the ImageNette and ImageNet-100 datasets. Our approaches can be integrated into any contrastive learning framework without incurring significant computational cost and can be trained with the same hyperparameters as the base contrastive method. We achieve a 2.8% gain in the performance of BYOL with the addition of the depth channel on the ImageNette dataset.

• Both approaches also yield representations that are more robust to image corruptions than the baseline SSL methods, as reflected in performance on ImageNet-C and ImageNet-3DCC. On the large-scale ImageNet-100 dataset, SimSiam+Depth outperforms the base SimSiam model by 4% in terms of corruption robustness.

2. RELATED WORK

Self-Supervised Learning. Self-supervised learning methods aim to learn a universal representation that generalizes to various downstream tasks. Earlier work on SSL relied on handcrafted pretext tasks like rotation (Gidaris et al., 2018), colorization (Zhang et al., 2016), and jigsaw (Noroozi & Favaro, 2016). Most recent state-of-the-art SSL methods are based on contrastive representation learning, whose goal is to make the representations of two augmented views of the same scene similar while making representations of views of different scenes dissimilar.
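This objective is commonly instantiated as an InfoNCE / NT-Xent loss. A minimal NumPy sketch (illustrative only, not any particular method's implementation) is:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """NT-Xent-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of N scenes.
    Row i of z1 and row i of z2 form the positive pair; every other
    row of z2 serves as a negative for z1[i].
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)     # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                                # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)             # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                      # positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
aligned = info_nce(z, z)                    # identical views: near-zero loss
mismatched = info_nce(z, rng.normal(size=(4, 16)))
```

Of the methods studied here, SwAV uses a related clustering-based objective, while BYOL and SimSiam drop explicit negatives and rely on architectural asymmetry; the shared goal of aligning two views of the same scene is the same.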



Figure 1: Improving Self-Supervised Learning by concatenating an estimated-depth channel to the RGB input. Depth is estimated from both the original image and its augmentation, and the resulting 4-channel inputs are used to produce the representation. Incorporating the depth channel improves downstream accuracy across a variety of SSL techniques, with the largest improvements on challenging corrupted benchmarks. (Teaser results shown; complete results in Tables 1 and 2.)

