DM-NERF: 3D SCENE GEOMETRY DECOMPOSITION AND MANIPULATION FROM 2D IMAGES

Abstract

In this paper, we study the problem of 3D scene geometry decomposition and manipulation from 2D views. By leveraging recent implicit neural representation techniques, particularly the appealing neural radiance fields, we introduce an object field component that learns a unique code for every individual object in 3D space from 2D supervision alone. The key to this component is a set of carefully designed loss functions that allow every 3D point, especially in unoccupied space, to be effectively optimized without 3D labels. In addition, we introduce an inverse query algorithm to freely manipulate any specified 3D object shape in the learned scene representation. Notably, our manipulation algorithm explicitly tackles key issues such as object collisions and visual occlusions. Our method, called DM-NeRF, is among the first to simultaneously reconstruct, decompose, manipulate, and render complex 3D scenes in a single pipeline. Extensive experiments on three datasets clearly show that our method accurately decomposes all 3D objects from 2D views, allowing any object of interest to be freely manipulated in 3D space, including translation, rotation, size adjustment, and deformation.

1. INTRODUCTION

In many cutting-edge applications such as mixed reality on mobile devices, users may desire to virtually manipulate objects in 3D scenes, for example moving a chair or making a broomstick fly in a 3D room. This would allow users to easily edit real scenes at their fingertips and view objects from new perspectives. However, this is particularly challenging, as it involves 3D scene reconstruction, decomposition, manipulation, and photorealistic rendering in a single framework (Savva et al., 2019). A traditional pipeline first reconstructs explicit 3D structures such as point clouds or polygonal meshes using SfM/SLAM techniques (Ozyesil et al., 2017; Cadena et al., 2016), and then identifies 3D objects for manual editing. However, these explicit 3D representations inherently discretize continuous surfaces, and changing their shapes often requires additional repair procedures such as remeshing (Alliez et al., 2002). Such discretized and manipulated 3D structures can hardly retain geometry and appearance details, causing the generated novel views to be unappealing. Given this, it is worthwhile to design a new pipeline that can recover continuous 3D scene geometry from 2D views alone while enabling object decomposition and manipulation.

Recently, implicit representations, especially NeRF (Mildenhall et al., 2020), have emerged as a promising tool for representing continuous 3D geometries from images. A series of subsequent methods (Boss et al., 2021; Chen et al., 2021; Zhang et al., 2021c) have rapidly been developed to decouple lighting factors from structures, allowing free edits of illumination and materials. However, they fail to decompose 3D scene geometries into individual objects, making it hard to manipulate individual object shapes in complex scenes.
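For intuition, NeRF-style methods render a pixel by alpha-compositing per-sample colors along a camera ray with density-derived weights; an object field such as DM-NeRF's can reuse the same weights to composite per-sample object codes into a per-pixel object prediction. The sketch below is a minimal NumPy illustration of this idea, not the paper's exact formulation; the function name and shapes are assumptions for exposition.

```python
import numpy as np

def composite_along_ray(densities, colors, obj_logits, deltas):
    """Alpha-composite per-sample colors and object logits along one ray.

    densities:  (S,)   volume density sigma at each of S samples
    colors:     (S, 3) RGB predicted at each sample
    obj_logits: (S, K) per-object code logits at each sample (hypothetical head)
    deltas:     (S,)   distances between adjacent samples
    """
    # Opacity of each sample from its density and interval length.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Compositing weights, shared by color and object-code rendering.
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)       # rendered pixel color
    obj = (weights[:, None] * obj_logits).sum(axis=0)   # rendered object logits
    return rgb, obj, weights
```

Because the object logits are composited with the same weights as color, they can be supervised with ordinary 2D object masks, which is what allows per-point object codes to be learned without any 3D labels.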
Recent works (Stelzner et al., 2021; Zhang et al., 2021b; Kania et al., 2022; Yuan et al., 2022; Tschernezki et al., 2022; Kobayashi et al., 2022; Kim et al., 2022; Benaim et al., 2022; Ren et al., 2022) have started to learn disentangled shape representations for potential geometry manipulation. However, they focus on either synthetic scenes or simple object segmentation, and can hardly scale to real-world 3D scenes with dozens of objects.

