SEMANTIC-GUIDED REPRESENTATION ENHANCEMENT FOR SELF-SUPERVISED MONOCULAR TRAINED DEPTH ESTIMATION

Abstract

Self-supervised depth estimation has shown great effectiveness in producing high-quality depth maps given only image sequences as input. However, its performance usually drops when estimating depth on border areas or objects with thin structures, due to limited depth representation ability. In this paper, we address this problem by proposing a semantic-guided depth representation enhancement method, which promotes both local and global depth feature representations by leveraging rich contextual information. Instead of the single depth network used in conventional paradigms, we introduce an extra semantic segmentation branch that offers additional contextual features for depth estimation. Based on this framework, we enhance the local feature representation by sampling point-based features that lie on semantic edges and feeding them to an individual Semantic-guided Edge Enhancement Module (SEEM), which is specifically designed to promote depth estimation on challenging semantic borders. We then improve the global feature representation with a semantic-guided multi-level attention mechanism, which enhances the semantic and depth features by exploring pixel-wise correlations in the multi-level depth decoding scheme. Extensive experiments validate the distinct superiority of our method in capturing highly accurate depth on challenging image areas such as semantic category borders and thin objects. Both quantitative and qualitative experiments on KITTI show that our method outperforms state-of-the-art methods.

1. INTRODUCTION

Depth estimation is a long-standing problem in the computer vision community, offering useful information to a wide range of tasks including robotic perception, augmented reality, and autonomous driving. Compared with depth estimation methods that rely on active vision or multi-view paradigms (Schonberger & Frahm, 2016; Li et al., 2019), estimating depth from only a single image is highly ill-posed and thus poses greater challenges for high-quality results. In recent years, monocular depth estimation has witnessed a renaissance with the advent of deep learning (He et al., 2016; Jaderberg et al., 2015; Simonyan & Zisserman, 2014). By learning deep representations in a supervised manner, various networks (Eigen & Fergus, 2015; Eigen et al., 2014; Laina et al., 2016) are capable of producing high-quality depth maps thanks to large corpora of training data. Meanwhile, considering the lack of labeled data for network training, recent advances (Zhou et al., 2017; Godard et al., 2019; 2017) show that monocular depth estimation can be accomplished in a self-supervised way: the network can be trained on unlabeled image sequences using two-view geometric constraints, while achieving results comparable with the supervised paradigm. Learning-based methods manage to handle the highly ill-posed monocular depth estimation problem by implicitly learning the mapping between visual appearance and the corresponding depth value. However, despite their great effectiveness, these methods still struggle to produce precise depth estimates on challenging image regions such as semantic category borders or thin object areas. For example, the estimated object depth often fails to align with the real object borders, and the depth of foreground objects with thin structures tends to be submerged into the background.
We attribute these phenomena to limited depth representation ability: (1) pixel-wise local depth information cannot be well represented by current depth networks, especially on highly ambiguous semantic border areas, and (2) current depth representations are not capable of describing depth foreground/background relationships globally. These issues lead to depth estimates that deviate from the true scene structure, which hinders further applications in real-world tasks. In this paper, we address this problem by enhancing both local and global depth feature representations for self-supervised monocular depth estimation via semantic guidance. As semantic segmentation conducts explicit category-level scene understanding and produces well-aligned object boundary detection, we propose an extra semantic estimation branch inside the self-supervised paradigm. The semantic branch offers rich contextual features which are fused with depth features during multi-scale learning. Under this framework, we propose to enhance the depth features using semantic guidance in a local-to-global way. To improve the local depth representations on semantic category borders, inspired by the sampling and enhancing strategies used in semantic segmentation (Kirillov et al., 2020), our first contribution is a Semantic-guided Edge Enhancement Module (SEEM), specially designed to enhance the local point-based depth representations that lie on semantic edges. Different from the method of Kirillov et al. (2020), we enhance the point features using multiple levels of representations from different domains under a self-supervised framework. Specifically, we sample a set of point positions lying on the binary semantic category borders, and extract the point-wise features at the corresponding edge positions from the encoded features, the depth decoding features, and the semantic decoding features.
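The edge-point sampling and feature gathering described above can be sketched in a few lines of NumPy. This is a minimal illustration under our own simplifying assumptions, not the paper's implementation: `sample_edge_points` and `gather_point_features` are hypothetical helper names, and the three toy feature maps stand in for the encoder, depth-decoder, and semantic-decoder features.

```python
import numpy as np

def sample_edge_points(edge_mask, num_points, rng):
    """Randomly sample up to num_points (y, x) positions lying on a binary edge mask."""
    ys, xs = np.nonzero(edge_mask)
    k = min(num_points, len(ys))
    idx = rng.choice(len(ys), size=k, replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)              # (k, 2)

def gather_point_features(feature_maps, points):
    """Extract point-wise features at the sampled positions from each (C_i, H, W)
    map and concatenate them along the channel axis."""
    feats = [f[:, points[:, 0], points[:, 1]].T for f in feature_maps]  # each (k, C_i)
    return np.concatenate(feats, axis=1)                     # (k, sum_i C_i)

rng = np.random.default_rng(0)
edge_mask = np.zeros((8, 8), dtype=bool)
edge_mask[3, :] = True                                       # a toy semantic border
enc, dep, sem = (rng.standard_normal((c, 8, 8)) for c in (16, 8, 8))
pts = sample_edge_points(edge_mask, num_points=4, rng=rng)
point_feats = gather_point_features([enc, dep, sem], pts)    # (4, 32)
```

The concatenated per-point vectors would then be passed through the SEEM's point-wise enhancement network before being merged back into the final depth decoding features.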
We then merge and enhance the point-wise features and feed them back into the final depth decoding features to promote edge-aware depth inference for self-supervision. For global depth representation enhancement, our second contribution is a semantic-guided multi-level attention module. Different from conventional self-attention (Fu et al., 2019; Vaswani et al., 2017), which is implemented as a single module on the bottleneck feature block, we propose to apply self-attention on different levels of the decoding layers. In this way, both semantic and depth representations can be further promoted by exploring and leveraging the pixel-wise correlations inside their fused features. We validate our method mainly on KITTI 2015 (Geiger et al., 2012), and the Cityscapes benchmark (Cordts et al., 2016) is also used to evaluate generalization ability. Experiments show that the proposed method significantly improves depth estimation on category edges and thin scene structures. Extensive quantitative and qualitative results validate the superiority of our method, which outperforms the state-of-the-art methods for self-supervised monocular depth estimation.
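As a rough illustration of the pixel-wise correlations exploited at each decoding level, the following NumPy sketch computes single-head self-attention over the flattened fused features of one level. It is a simplified stand-in, not the paper's module: `self_attention` and the projection matrices `Wq`, `Wk`, `Wv` are hypothetical names introduced here for clarity.

```python
import numpy as np

def self_attention(fused, Wq, Wk, Wv):
    """Single-head self-attention over fused (H*W, C) features: every pixel
    aggregates context from every other pixel, weighted by feature similarity."""
    q, k, v = fused @ Wq, fused @ Wk, fused @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])           # (H*W, H*W) pixel-wise correlations
    scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                                  # globally enhanced features

rng = np.random.default_rng(0)
H, W, C = 6, 6, 12
fused = rng.standard_normal((H * W, C))              # flattened fused semantic+depth features
Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = self_attention(fused, Wq, Wk, Wv)              # (36, 12)
```

In a multi-level scheme such as ours, this enhancement would be repeated on the fused features of each decoding layer rather than applied once at the bottleneck.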

2. RELATED WORK

There exists extensive research on monocular depth estimation, including geometry-based methods (Schonberger & Frahm, 2016; Enqvist et al., 2011) and learning-based methods (Eigen et al., 2014; Laina et al., 2016). In this paper, however, we concentrate only on self-supervised depth training and semantic-guided depth estimation, which are most closely related to the focus of this paper.
Self-supervised depth estimation. Self-supervised methods enable networks to learn depth representations from merely unlabeled image sequences by reformulating the depth supervision loss as an image reconstruction loss. Godard et al. (2017) and Garg et al. (2016) first propose self-supervised methods on stereo images; Zhou et al. (2017) then propose a monocular trained approach using a separate motion estimation network. Based on these frameworks, a large corpus of works seeks to promote self-supervised learning from different aspects. For more robust self-supervision signals, Mahjourian et al. (2018) propose to use 3D information as an extra supervisory signal, while another line of methods leverages additional information such as optical flow (Ranjan et al., 2019; Wang et al., 2019b) to strengthen depth supervision via consistency constraints. To address loss deviation problems in non-rigid motion areas, selective masks are used to filter out these areas when computing losses. Prior works generate the mask with the network itself (Zhou et al., 2017; Yang et al., 2017; Vijayanarasimhan et al., 2017), while succeeding methods produce the mask by leveraging geometric clues (Bian et al., 2019; Wang et al., 2019a; Godard et al., 2019; Klingner et al., 2020), which has proved more effective. There also exist methods that enhance network performance with traditional SfM (Schonberger & Frahm, 2016), which offers pseudo labels for depth estimation. Guizilini et al. (2020a) propose a novel network architecture to improve depth estimation.
In this paper, we do not consider the

