NEURAL IMPLICIT SHAPE EDITING USING BOUNDARY SENSITIVITY

Abstract

Neural fields are receiving increased attention as a geometric representation due to their ability to compactly store detailed and smooth shapes and easily undergo topological changes. Compared to classic geometry representations, however, neural representations do not allow the user to exert intuitive control over the shape. Motivated by this, we leverage boundary sensitivity to express how perturbations in parameters move the shape boundary. This allows us to interpret the effect of each learnable parameter and study achievable deformations. With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation. Prescribing the deformation only locally allows the rest of the shape to change according to some prior, such as semantics or deformation rigidity. Our method is agnostic to the model and its training and updates the NN in-place. Furthermore, we show how boundary sensitivity helps to optimize and constrain objectives (such as surface area and volume), which are difficult to compute without first converting to another representation, such as a mesh.

1. INTRODUCTION

A neural field is a neural network (NN) mapping every point in a domain of interest, typically of 2 or 3 dimensions, to one or more outputs, such as a signed distance function (SDF), occupancy probability, opacity, or color. This allows us to represent smooth, detailed, and watertight shapes with topological flexibility, while being compact to store compared to classic implicit representations (Davies et al., 2020). When the NN is trained not on a single shape but instead on an entire collection, each shape is encoded in a latent vector, which is an additional input to the NN (Park et al., 2019; Chen & Zhang, 2019; Mescheder et al., 2019). As a result, neural fields are receiving increased interest as a geometric representation in numerous applications, such as shape generation (Park et al., 2019), shape completion (Chibane et al., 2020), shape optimization (Remelli et al., 2020), scene representation (Sitzmann et al., 2020), and view synthesis (Mildenhall et al., 2020). Some pioneering works have also investigated geometry processing, like smoothing and deformation, on neural implicit shapes (Yang et al., 2021; Remelli et al., 2020; Mehta et al., 2022; Guillard et al., 2021), but these can be computationally costly or resort to intermediate mesh representations. In part, this difficulty stems from the shape being available only implicitly as the sub-level set of the field. While intuitive (often synonymous with local) geometric control is a key design principle of classic explicit or parametric representations (like meshes, splines, or subdivision schemes), it is not trivial to edit even classic implicit representations, especially ones with global basis functions (Baerentzen & Christensen, 2002). Previous works on neural implicit shape editing have focused on shape semantics, i.e., changing part-level features based on the whole shape structure, but they achieve this through tailored training procedures or architectures, or resort to intermediate mesh representations.
We instead propose a framework that unifies geometric and semantic editing, is agnostic to the model and its training, and modifies the given model in-place, akin to classic representations. To treat the geometry, not the field, as the primary object, we consider boundary sensitivity, which relates changes in the function parameters to changes in the implicit shape. This allows us to express and interpret a basis for the displacement space.
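To make the notion of boundary sensitivity concrete: for a shape given as the zero level set {x : f(x; θ) = 0}, a first-order perturbation δθ of the parameters displaces each boundary point along the outward normal by v_n = -(∂f/∂θ · δθ) / ||∇_x f||, a standard result from shape optimization. The following is a minimal sketch of this relation using a toy one-parameter SDF (a circle of radius r) and finite differences; the function names and the toy model are illustrative, not the paper's neural network:

```python
import math

def f(x, y, r):
    """Signed distance to a circle of radius r (toy stand-in for a neural SDF)."""
    return math.hypot(x, y) - r

def boundary_normal_velocity(x, y, r, dr, eps=1e-6):
    """First-order normal displacement of the zero level set at boundary point
    (x, y) when the parameter r is perturbed by dr:
        v_n = -(df/dr * dr) / ||grad_x f||
    All derivatives are approximated by central finite differences."""
    df_dr = (f(x, y, r + eps) - f(x, y, r - eps)) / (2 * eps)
    gx = (f(x + eps, y, r) - f(x - eps, y, r)) / (2 * eps)
    gy = (f(x, y + eps, r) - f(x, y - eps, r)) / (2 * eps)
    grad_norm = math.hypot(gx, gy)
    return -(df_dr * dr) / grad_norm

# Growing the radius by 0.1 moves every boundary point outward by 0.1,
# which the sensitivity formula recovers exactly for this toy SDF.
v = boundary_normal_velocity(1.0, 0.0, r=1.0, dr=0.1)
print(round(v, 4))  # 0.1
```

For a neural SDF, ∂f/∂θ and ∇_x f would instead be obtained by automatic differentiation, and the editing problem becomes finding the δθ whose induced v_n best matches a prescribed deformation.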

