DEEP LEARNING SOLUTION OF THE EIGENVALUE PROBLEM FOR DIFFERENTIAL OPERATORS

Anonymous

Abstract

Solving the eigenvalue problem for differential operators is a common task in many scientific fields. Classical numerical methods rely on intricate domain discretization and yield non-analytic or non-smooth approximations. We introduce a novel Neural Network (NN)-based solver for the eigenvalue problem of self-adjoint differential operators, in which the eigenpairs are learned in an unsupervised, end-to-end fashion. We propose three different training procedures for solving increasingly challenging tasks, leading up to the general eigenvalue problem. The proposed solver is able to find the M smallest eigenpairs of a general differential operator. We demonstrate the method on the Laplacian operator, which is of particular interest in image processing, computer vision, and shape analysis, among many other applications. Unlike other numerical methods, such as finite differences, the partial derivatives of the network approximation of the eigenfunction can be calculated analytically to any order. The proposed framework therefore enables the solution of higher-order operators on free-form domains, or even on manifolds. Non-linear operators can be investigated with this approach as well.

1. INTRODUCTION

Eigenfunctions and eigenvalues of the Laplacian (among other operators) are important in various applications ranging, inter alia, from image processing to computer vision, shape analysis, and quantum mechanics. They are also of major importance in various engineering applications where resonance is crucial for design and safety [Benouhiba & Belyacine (2013)]. Laplacian eigenfunctions allow us to perform spectral analysis of data measured on more general domains, or even on graphs and networks [Shi & Malik (2000)]. Additionally, the M smallest eigenvalues of the Laplace-Beltrami operator are fundamental features for comparing geometric objects such as 3D shapes, images, or point clouds via the functional maps method in statistical shape analysis [Ovsjanikov et al. (2012)]. Moreover, in quantum mechanics, the smallest eigenvalues and eigenfunctions of the Hamiltonian are of great physical significance [Han et al. (2019)]. In this paper we present a novel numerical method for the computation of these eigenfunctions (efs) and eigenvalues (evs), where the efs are parameterized by NNs with continuous activation functions, and the evs are directly calculated via the Rayleigh quotient. The resulting efs are therefore smooth functions defined in a parametric way. This is in contrast to the finite element [Pradhan & Chakraverty (2019)] and finite difference [Saad (2005); Knyazev (2000)] methods, in which the efs are defined either on a grid or as piecewise linear/polynomial functions with limited smoothness. In these matrix-based approaches one must first discretize the problem and represent it as an eigenvalue problem for a matrix, which is itself prone to numerical errors. Following [Bar & Sochen (2019)], we suggest an unsupervised approach to learn the eigenpairs of a differential operator on a specified domain with boundary conditions, where the network simultaneously approximates the eigenfunctions at every point x.
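To make the Rayleigh-quotient computation concrete, the following is a minimal sketch (an assumed setup, not the paper's code) for the 1-D Dirichlet Laplacian on (0, π): the eigenvalue is estimated as the Rayleigh quotient R[u] = ∫|u'|² dx / ∫u² dx over a uniformly sampled point cloud. Here the closed-form trial function u(x) = sin(x) stands in for the NN-parameterized eigenfunction, and its derivative is written explicitly where the paper would obtain it by automatic differentiation; the exact smallest eigenvalue is λ₁ = 1.

```python
import numpy as np

# Sample the domain (0, pi) with a uniformly distributed point cloud,
# as done for the collocation points in the paper.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, np.pi, size=200_000)

u = np.sin(x)    # trial eigenfunction (in the paper: the NN output)
du = np.cos(x)   # its derivative (in the paper: computed by autodiff)

# Monte Carlo estimate of the Rayleigh quotient; the domain volume
# cancels between numerator and denominator.
rayleigh = np.mean(du**2) / np.mean(u**2)
print(rayleigh)  # close to 1.0, the smallest eigenvalue
```

For an actual network the only change is that u and du come from a forward pass and automatic differentiation, which is what allows derivatives of any order to be evaluated exactly at the sample points.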
The method is based on a uniformly distributed point set which is trained to satisfy two fidelity terms of the eigenvalue problem, formulated as L_2 and L_∞-like norms, along with boundary conditions, an orthogonality constraint, and regularization. There are several advantages to the proposed setting: (i) the framework is general in the sense that it can also be used for non-linear differential operators with high-order derivatives. (ii) Since we sample the domain with a point cloud, we are not limited to standard domains; the problem can therefore be solved on an arbitrary regular domain. (iii) The framework is generic, such that additional constraints and regularizers can be naturally integrated into the cost function. (iv) Unlike

