PHYSICS-INFORMED NEURAL NETWORKS ON COMPLEX GEOMETRIES

Abstract

Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations. Despite recent progress on expanding the class of problems that can be tackled by PINNs, most existing use cases involve simple geometric domains. To date, there is no clear way to inform PINNs about the topology of the domain where the problem is being solved. In this work, we propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator. This technique allows us to create an input space for the neural network that represents the geometry of a given object. We approximate the eigenfunctions, as well as the operators involved in the partial differential equations, with finite elements. We extensively test and compare the proposed methodology against traditional PINNs on complex shapes, such as a coil, a heat sink and a bunny, with different physics, such as the Eikonal equation and heat transfer. We also study the sensitivity of our method to the number of eigenfunctions used, as well as to the discretization of the eigenfunctions and the underlying operators. Our results show excellent agreement with the ground truth in cases where traditional PINNs fail to produce a meaningful solution. We envision that this new technique will expand the effectiveness of PINNs to more realistic applications.

1. MOTIVATION

Physics-informed neural networks (PINNs) Raissi et al. (2019) are an exciting new tool for blending data and known physical laws in the form of differential equations. They have been successfully applied to multiple physical domains, such as fluid mechanics Raissi et al. (2020), solid mechanics Haghighat et al. (2021), heat transfer Cai et al. (2021) and biomedical engineering Kissas et al. (2020); Ruiz Herrera et al. (2022), to name a few. Even though this technique can be used to solve forward problems, it tends to excel at inverse problems. Since their inception, there have been multiple attempts to improve PINNs in areas such as training strategies Nabian et al. (2021); Wang et al. (2022) and activation functions Jagtap et al. (2020). Despite this recent progress, many works still consider simple geometric domains for either forward or inverse problems, hindering the applicability of PINNs to real-world problems, where the objects of study may have complicated shapes and topologies. There have been multiple attempts to introduce geometric complexity to the input domain of the neural network. One approach is to describe the boundary of the domain with a signed distance function Sukumar & Srivastava (2022); Berg & Nyström (2018); McFall & Mahan (2009). In this way, the boundary conditions can be satisfied exactly instead of relying on a penalty term in the loss function. Another approach is to use domain decomposition to model smaller but simpler domains Jagtap & Karniadakis (2020). Coordinate mappings from a simple domain to the target geometry have also been proposed, for both convolutional Gao et al. (2021) and fully connected networks Li et al. (2022). Nonetheless, all of these works demonstrate examples on 2-dimensional shapes. When using PINNs on 3-dimensional surfaces, there have been attempts to ensure that the vector fields appearing in the partial differential equations remain tangent to the surface by introducing additional terms in the loss function Fang et al. (2021); Sahli Costabal et al. (2020). This approach works well when the surface is relatively simple and smooth, specifically when the Euclidean distance in the embedding 3-D space between two points in the domain is similar to the intrinsic geodesic distance on the manifold. However, in several applications the two distances may differ substantially, as exemplified in Figure 1.

In this work, we propose to represent the coordinates of the input geometry with a positional encoding based on the eigenfunctions of the Laplace-Beltrami operator of the manifold, or of the Laplacian in the case of a bounded open domain in Euclidean space. In this way, points that are close in the geometry remain close in the positional encoding space. The input of the neural network is then the value of a finite number of eigenfunctions evaluated at a point in the domain, while the output remains the physical quantity that we are modeling with PINNs. Positional encodings have shown great success in improving the capabilities of neural networks and are used in transformers Vaswani et al. (2017), neural radiance fields Mildenhall et al. (2020) and PINNs Wang et al. (2021a), to name a few. For our positional encoding, the Laplace-Beltrami eigenfunctions can be approximated numerically for any shape with the finite element method. By approximating the eigenfunctions on a mesh, we lose the ability to use automatic differentiation to compute the operators of the partial differential equations within existing library codes; however, automatic differentiation could be preserved whenever the finite element library offers such capability, as many existing libraries do. We show that common operators, such as the gradient and the Laplacian, can be efficiently computed with finite elements applied to the output of the neural network. We demonstrate the capabilities of the proposed method on different geometries, such as a coil, a heat sink and a bunny, and with different physics, such as the Eikonal equation and heat transfer.

2.1. PHYSICS-INFORMED NEURAL NETWORKS

We consider the problem where we have partial observations of an unknown function u(x), with an input domain x ∈ B, where B is an open and bounded domain in R^d, d = 2, 3, or a d-dimensional smooth manifold (typically a surface). We also assume that u(x) satisfies a partial differential equation of the form N[u, λ] = 0, where N[·, λ] is a potentially non-linear operator parametrized by λ. The partial observations u_i are located at positions x_i ∈ B, i = 1, ..., N, which may also fall on the boundary ∂B. Boundary conditions, such as the Neumann condition ∇u · n = g_i, may additionally be enforced at boundary points x^b_i, i = 1, ..., B. We proceed by approximating the unknown function with a neural network, u ≈ û = NN(x, θ), parametrized by trainable parameters θ. In order to learn a function that satisfies the observed data, the boundary conditions and the partial differential equation, we train the network by minimizing a composite loss that penalizes the misfit of each of these terms.

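A physics-informed loss combining observed data, Neumann boundary conditions and the PDE residual from Section 2.1 can be sketched in a few lines. The following minimal Python example is illustrative only: it assumes the network output and its gradients (e.g., obtained with finite elements on the mesh) are already available as arrays, it uses the Eikonal residual |∇u| − 1 = 0 as the PDE term, and the function name `pinn_loss` and all array shapes are hypothetical.

```python
import numpy as np

def pinn_loss(u_pred, u_obs, grad_u, grad_u_bnd, normals, g):
    """Sketch of a composite PINN loss for the Eikonal equation |grad u| = 1.

    u_pred, u_obs          : network prediction / observation at the data points
    grad_u                 : (M, d) gradients of the network output at
                             collocation points (assumed precomputed)
    grad_u_bnd, normals, g : quantities for the Neumann condition grad u . n = g
    """
    data_loss = np.mean((u_pred - u_obs) ** 2)                      # data misfit
    pde_loss = np.mean((np.linalg.norm(grad_u, axis=1) - 1.0) ** 2) # PDE residual
    bc_loss = np.mean((np.sum(grad_u_bnd * normals, axis=1) - g) ** 2)
    return data_loss + pde_loss + bc_loss

# For an exact distance field the gradient has unit norm everywhere and the
# Neumann data are matched, so every term of the loss vanishes.
unit_grads = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = pinn_loss(np.zeros(2), np.zeros(2), unit_grads, unit_grads,
                 unit_grads, np.ones(2))
print(loss)   # 0.0
```

In practice each term is usually weighted by a tunable coefficient, since the data, boundary and residual scales can differ by orders of magnitude.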
Figure 1: Learning the Eikonal equation on a coil. Top row: the 1st, 10th, 50th and 100th Laplace-Beltrami eigenfunctions of the geometry. Bottom row, first: the ground truth solution of the Eikonal equation, which represents the geodesic distance, with the data points used for training shown as gray spheres. Second: the solution of ∆-PINNs, our proposed method, trained with 50 eigenfunctions. Third: the traditional PINNs approximation. Last: the approximate solution of a physics-informed graph-convolutional network.
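As a concrete illustration of the encoding itself, the snippet below computes the lowest eigenpairs of a toy Laplacian and stacks the eigenvectors into per-vertex input features. It is a minimal sketch, not the authors' code: a 1-D chain of vertices stands in for a triangulated surface, whereas a real geometry would require assembling the FEM (cotangent) Laplace-Beltrami matrix from the mesh.

```python
import numpy as np

# Toy stand-in for a mesh Laplacian: a 1-D chain of n vertices with
# Neumann-like ends. On a real geometry one would assemble the finite
# element Laplace-Beltrami matrix from the mesh instead.
n = 200
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# Eigendecomposition of the symmetric Laplacian: eigenvalues are >= 0 and
# returned in ascending order; the first is ~0 with a constant eigenvector.
eigvals, eigvecs = np.linalg.eigh(L)

# The first k eigenvectors evaluated at the mesh vertices form the
# positional encoding fed to the network in place of raw coordinates.
k = 10
encoding = eigvecs[:, :k]          # shape (n, k): one k-vector per vertex
print(encoding.shape)              # (200, 10)
```

On a real mesh one would replace the dense `eigh` call with a sparse solver such as `scipy.sparse.linalg.eigsh`, since only the first few eigenfunctions are needed.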


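Since the network is evaluated at mesh vertices, differential operators can be applied to its nodal output with standard finite element machinery, as noted in the text for the gradient and the Laplacian. The sketch below shows the simplest instance, assuming linear (P1) elements on a 2-D triangle, where the gradient of the interpolant is constant per element; the helper `p1_gradient` is hypothetical, not taken from the paper.

```python
import numpy as np

def p1_gradient(tri, u):
    """Constant gradient of the linear (P1) interpolant on one triangle.

    tri : (3, 2) vertex coordinates; u : (3,) nodal values, e.g. the neural
    network output at the vertices. Solves g . (p1 - p0) = u1 - u0 and
    g . (p2 - p0) = u2 - u0 for the element gradient g.
    """
    edges = np.stack([tri[1] - tri[0], tri[2] - tri[0]])   # (2, 2) edge matrix
    du = np.array([u[1] - u[0], u[2] - u[0]])
    return np.linalg.solve(edges, du)

# Sanity check: for the linear field u(x, y) = 3x - 2y the exact gradient
# (3, -2) is recovered on any non-degenerate triangle.
tri = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])
u = np.array([3.0 * x - 2.0 * y for x, y in tri])
g = p1_gradient(tri, u)
print(g)   # approximately [ 3. -2.]
```

The same idea extends to triangles embedded in 3-D (giving a tangential surface gradient) and to weak-form Laplacians assembled from the mesh.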