PARAMETRIZING PRODUCT SHAPE MANIFOLDS BY COMPOSITE NETWORKS

Abstract

Parametrizations of data manifolds in shape spaces can be computed using the rich toolbox of Riemannian geometry. This, however, often comes with high computational costs, which raises the question of whether one can learn an efficient neural network approximation. We show that this is indeed possible for shape spaces with a special product structure, namely those smoothly approximable by a direct sum of low-dimensional manifolds. Our proposed architecture leverages this structure by separately learning approximations for the low-dimensional factors and a subsequent combination. After developing the approach as a general framework, we apply it to a shape space of triangular surfaces. Here, typical examples of data manifolds are given through datasets of articulated models and can be factorized, for example, by Sparse Principal Geodesic Analysis (SPGA). We demonstrate the effectiveness of our proposed approach with experiments on synthetic data as well as on manifolds extracted from data via SPGA.

1. INTRODUCTION

Modeling collections of shapes as data on Riemannian manifolds has enabled the usage of a rich set of mathematical tools in areas such as computer graphics and vision, medical imaging, computational biology, and computational anatomy. For example, Principal Geodesic Analysis, a generalization of Principal Component Analysis, can be used to parametrize submanifolds approximating given data points while preserving structure of the data, such as its invariance to rigid motion. The evaluation of such a parametrization, however, typically comes at a high computational cost, as the Riemannian exponential, mapping infinitesimal shape variations to shapes, has to be evaluated. This motivates trying to learn an efficient approximation for these parametrizations. Direct application of deep neural networks (NNs), however, proves ineffective for high-dimensional spaces with strongly nonlinear variations. Therefore, we consider more structured shape manifolds; namely, we assume that they can be approximated by an affine sum of low-dimensional submanifolds. In computer graphics, typical examples of data manifolds are given through datasets of articulated models, e.g., human bodies, faces, or hands. Then, the desired structure of an affine sum of factor manifolds can be produced, for example, by Sparse Principal Geodesic Analysis (SPGA). Motivated by this, we exploit the data manifolds' approximability with such affine sums: We separately approximate the exponential map on the factor manifolds by fully connected NNs and the subsequent combination of factors by a convolutional NN to yield our approximate parametrization. In formulas, based on a judiciously chosen decomposition v = v_1 + ... + v_J, our aim is to approximate the Riemannian exponential exp_z(v) by Ψ_ζ(ψ_{ζ_1}(v_1), ..., ψ_{ζ_J}(v_J)), where Ψ_ζ is a NN and the ψ_{ζ_j} are further NNs approximating the Riemannian exponential exp_z on the low-dimensional factor manifolds.
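The composite architecture above can be sketched in a few lines of code. The following is a minimal, illustrative forward pass only: all network sizes (J = 3 factors, 4-dimensional factor tangent vectors, a 30-dimensional shape output) are hypothetical, the weights are random rather than trained, and the combination network is a plain fully connected NN here, whereas the paper proposes a convolutional NN for Ψ_ζ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Random weights for a small fully connected network
    (illustrative only; in practice these would be trained)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Forward pass with tanh activations on hidden layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Hypothetical sizes: J factors, factor tangent vectors v_j in R^4,
# shape output in R^30 (e.g. stacked vertex coordinates of a mesh).
J, d_factor, D = 3, 4, 30

# Factor networks psi_{zeta_j}: each approximates exp_z restricted
# to one low-dimensional factor manifold.
psis = [mlp([d_factor, 16, D]) for _ in range(J)]

# Combination network Psi_zeta: merges the J factor outputs into one shape.
Psi = mlp([J * D, 32, D])

def composite_exp(vs):
    """Approximate exp_z(v_1 + ... + v_J) by
    Psi_zeta(psi_{zeta_1}(v_1), ..., psi_{zeta_J}(v_J))."""
    factor_outputs = [forward(psi, v) for psi, v in zip(psis, vs)]
    return forward(Psi, np.concatenate(factor_outputs))

vs = [rng.standard_normal(d_factor) for _ in range(J)]
shape = composite_exp(vs)
print(shape.shape)  # (30,)
```

The key design point carried over from the text is that each ψ_{ζ_j} sees only its own low-dimensional input v_j, so the hard nonlinear work happens in small networks, and only the final combination operates at full shape dimension.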
We develop our approach focusing on the shape space of discrete shells, where shapes are given by triangle meshes and the manifold is equipped with an elasticity-based metric. In principle, our approach is also applicable to other shape spaces, such as manifolds of images, and we include remarks on how it could be adapted to them. We evaluate our approach with experiments on data manifolds of triangle meshes, both synthetic ones and ones extracted from data via SPGA, and we demonstrate that the proposed composite network architecture outperforms both a monolithic fully connected network architecture and an approach based on the affine combination of the factors. We see this work as a first step toward using NNs to accelerate the complex computations behind shape manifold parametrizations. We therefore believe that our approach has great potential to stimulate further research in this direction, which could in turn advance the applications of Riemannian shape spaces.

