PHYSICS-INFORMED NEURAL NETWORKS ON COMPLEX GEOMETRIES

Abstract

Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations. Despite recent progress in expanding the class of problems that PINNs can tackle, most existing use cases involve simple geometric domains. To date, there is no clear way to inform PINNs about the topology of the domain in which the problem is posed. In this work, we propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator. This technique creates an input space for the neural network that represents the geometry of a given object. We approximate the eigenfunctions, as well as the operators involved in the partial differential equations, with finite elements. We extensively test and compare the proposed methodology against traditional PINNs on complex shapes, such as a coil, a heat sink, and a bunny, with different physics, such as the Eikonal equation and heat transfer. We also study the sensitivity of our method to the number of eigenfunctions used, as well as to the discretization of the eigenfunctions and the underlying operators. Our results show excellent agreement with ground truth data in cases where traditional PINNs fail to produce a meaningful solution. We envision that this new technique will expand the effectiveness of PINNs to more realistic applications.



Despite recent progress, many works still consider simple geometric domains when solving either forward or inverse problems, hindering the applicability of PINNs to real-world problems, where the objects of study may have complicated shapes and topologies. In this area, there have been multiple attempts to introduce geometric complexity into the input domain of the neural networks. One approach is to describe the boundary of the domain with a signed distance function Sukumar & Srivastava (2022); Berg & Nyström (2018); McFall & Mahan (2009). In this way, the boundary conditions can be satisfied exactly instead of relying on a penalty term in the loss function. Another approach is to use domain decomposition to model smaller but simpler subdomains Jagtap & Karniadakis (2020). Coordinate mappings between a simple reference domain and the physical domain have also been proposed, for both convolutional Gao et al. (2021) and fully connected networks Li et al. (2022). Nonetheless, all of these works demonstrate only 2-dimensional examples. When applying PINNs on 3-dimensional surfaces, there have been attempts to ensure that the vector fields appearing in the partial differential equations remain tangent to the surface by introducing additional terms in the loss function Fang et al. (2021); Sahli Costabal et al. (2020). This approach works well when the surface is relatively simple and smooth, specifically when the Euclidean distance between two points in the embedding 3-D space is similar to the intrinsic geodesic distance on the manifold. However, in several applications the two distances may differ considerably, as exemplified in Figure 1.
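The geometry-aware encoding described in the abstract can be sketched on a discrete domain: one assembles a discrete Laplacian over the mesh vertices and uses its lowest eigenvectors as a positional encoding, so that nearby inputs in the encoding correspond to intrinsically (geodesically) close points. The following minimal NumPy sketch uses a combinatorial graph Laplacian on a small grid as a stand-in for the paper's finite-element Laplace-Beltrami discretization; the function and variable names (`grid_laplacian`, `pe`) are illustrative, not from the paper.

```python
import numpy as np

def grid_laplacian(n):
    """Combinatorial graph Laplacian of an n-by-n grid of vertices.

    Stands in for the finite-element Laplace-Beltrami matrix that would be
    assembled on a triangulated surface mesh in the actual method.
    """
    N = n * n
    L = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            v = i * n + j
            # connect each vertex to its right and bottom grid neighbors
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    u = (i + di) * n + (j + dj)
                    L[v, u] = L[u, v] = -1.0
    # diagonal holds the vertex degree, so each row sums to zero
    L[np.arange(N), np.arange(N)] = -L.sum(axis=1)
    return L

L = grid_laplacian(10)
# eigh returns eigenpairs of the symmetric L in ascending eigenvalue order;
# the first eigenvalue is 0 (constant eigenvector) for a connected domain
evals, evecs = np.linalg.eigh(L)

k = 8                 # number of eigenfunctions kept for the encoding
pe = evecs[:, :k]     # k-dimensional positional encoding per vertex
```

Each mesh vertex would then be fed to the PINN through its row of `pe` instead of (or alongside) its raw 3-D coordinates, making the network input intrinsic to the surface.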



Physics-informed neural networks (PINNs) Raissi et al. (2019) are an exciting new tool for blending data and known physical laws in the form of differential equations. They have been successfully applied to multiple physical domains, such as fluid mechanics Raissi et al. (2020), solid mechanics Haghighat et al. (2021), heat transfer Cai et al. (2021), and biomedical engineering Kissas et al. (2020); Ruiz Herrera et al. (2022), to name a few. Even though this technique can be used to solve forward problems, PINNs tend to excel at inverse problems. Since their inception, there have been multiple attempts to improve PINNs in areas such as training strategies Nabian et al. (2021); Wang et al. (2022) and activation functions Jagtap et al. (2020).
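The core of a PINN is a composite loss: a network representing the solution is penalized both for violating the differential equation at interior collocation points and for missing the boundary conditions. A minimal NumPy sketch of evaluating such a loss for the 1-D Poisson problem u''(x) = f(x) on [0, 1] with homogeneous Dirichlet boundaries is shown below. The chosen source term f (corresponding to the exact solution sin(pi x)) and the name `pinn_loss` are assumptions for illustration only, and the second derivative of the small tanh network is coded by hand, whereas real PINN implementations obtain it via automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer tanh network: u(x) = sum_i v_i * tanh(w_i * x + b_i)
w = rng.normal(size=8)
b = rng.normal(size=8)
v = rng.normal(size=8)

def u(x):
    """Network output at points x (1-D array)."""
    return np.tanh(np.outer(x, w) + b) @ v

def u_xx(x):
    """Analytic second derivative: d2/dx2 tanh(wx+b) = -2 w^2 t (1 - t^2)."""
    t = np.tanh(np.outer(x, w) + b)
    return (-2.0 * w**2 * t * (1.0 - t**2)) @ v

def f(x):
    # assumed source term; the exact solution would be sin(pi x)
    return -np.pi**2 * np.sin(np.pi * x)

def pinn_loss(x_interior):
    residual = u_xx(x_interior) - f(x_interior)    # PDE residual term
    bc = u(np.array([0.0, 1.0]))                   # Dirichlet boundary term, target 0
    return np.mean(residual**2) + np.mean(bc**2)

x = rng.uniform(0.0, 1.0, size=64)   # random interior collocation points
loss = pinn_loss(x)
```

Training a PINN then amounts to minimizing `pinn_loss` over the network parameters, typically with a gradient-based optimizer; inverse problems additionally treat unknown physical coefficients as trainable parameters.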

