SEMANTIC-GUIDED REPRESENTATION ENHANCEMENT FOR SELF-SUPERVISED MONOCULAR TRAINED DEPTH ESTIMATION

Abstract

Self-supervised depth estimation has shown great effectiveness in producing high-quality depth maps given only image sequences as input. However, its performance usually drops when estimating depth on border areas or objects with thin structures due to limited depth representation ability. In this paper, we address this problem by proposing a semantic-guided depth representation enhancement method, which promotes both local and global depth feature representations by leveraging rich contextual information. Instead of a single depth network as used in conventional paradigms, we propose an extra semantic segmentation branch that offers additional contextual features for depth estimation. Based on this framework, we enhance the local feature representation by sampling the point-based features located on semantic edges and feeding them to an individual Semantic-guided Edge Enhancement Module (SEEM), which is specifically designed to promote depth estimation on the challenging semantic borders. We then improve the global feature representation by proposing a semantic-guided multi-level attention mechanism, which enhances the semantic and depth features by exploring pixel-wise correlations in the multi-level depth decoding scheme. Extensive experiments validate the distinct superiority of our method in capturing highly accurate depth in challenging image areas such as semantic category borders and thin objects. Both quantitative and qualitative experiments on KITTI show that our method outperforms the state-of-the-art methods.

1. INTRODUCTION

Depth estimation is a long-standing problem in the computer vision community, offering useful information to a wide range of tasks including robotic perception, augmented reality, and autonomous driving. Compared with depth estimation methods that rely on active vision or multi-view paradigms (Schonberger & Frahm, 2016; Li et al., 2019), estimating depth from only a single image is highly ill-posed and thus poses greater challenges for high-quality results. In recent years, monocular depth estimation has witnessed a renaissance with the advent of deep learning (He et al., 2016; Jaderberg et al., 2015; Simonyan & Zisserman, 2014). By learning deep representations in a supervised manner, various networks (Eigen & Fergus, 2015; Eigen et al., 2014; Laina et al., 2016) are capable of producing high-quality depth maps thanks to large corpora of training data. Meanwhile, given the lack of labeled training data, recent advances (Zhou et al., 2017; Godard et al., 2019; 2017) show that monocular depth estimation can be accomplished in a self-supervised way: the network is trained on unlabeled image sequences using two-view geometric constraints, while achieving results comparable to the supervised paradigm.

Learning-based methods manage to handle the highly ill-posed monocular depth estimation problem by implicitly learning the mapping between visual appearance and its corresponding depth value. However, despite their great effectiveness, these methods still struggle to produce precise depth estimates in challenging image regions such as semantic category borders or thin object areas. For example, the estimated object depth usually fails to align with the real object borders, and the depth of foreground objects with thin structures tends to be
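To make the two-view geometric constraint concrete, the sketch below shows the reprojection step that such self-supervised losses build on: each target-view pixel is back-projected with the predicted depth, rigidly transformed by the relative camera pose, and projected into the source view, where the photometric error is then measured. This is a minimal NumPy illustration of the general principle, not the paper's implementation; the function name and toy intrinsics are our own.

```python
import numpy as np

def reproject_pixels(depth, K, T):
    """Map every target-view pixel into the source view.

    depth: (H, W) predicted depth of the target view
    K:     (3, 3) camera intrinsics
    T:     (4, 4) relative pose, target -> source
    Returns an (H, W, 2) array of source-view pixel coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel grid, shape (3, H*W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix            # back-project to camera rays
    pts = rays * depth.reshape(1, -1)        # scale rays by predicted depth
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_src = (T @ pts_h)[:3]                # rigid transform into source frame
    proj = K @ pts_src                       # project with the intrinsics
    return (proj[:2] / proj[2:]).T.reshape(H, W, 2)
```

Sampling the source image at the returned coordinates synthesizes the target view; the discrepancy between the synthesized and real target image is the self-supervision signal. A useful sanity check is that an identity pose maps every pixel back to itself.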

