MESH-FREE EULERIAN PHYSICS-INFORMED NEURAL NETWORKS

Abstract

Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in the form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being selected in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, in contrast to classical mesh-free methods, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated in various experiments based on the continuity equations, Fokker-Planck equations, and the heat equation.

1. INTRODUCTION

Many phenomena in physics are commonly described by partial differential equations (PDEs) which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found, for instance, in fluid dynamics, with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as in modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs.

The general idea of PINNs is to use the expressive power of modern neural architectures for solving PDEs in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form

    f(t, x | λ) := ∂_t u(t, x) + P(u | λ) = 0,    (1)

where P is a non-linear operator parameterized by λ, and ∂_t is the partial time derivative w.r.t. t ∈ [0, T]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ R^d. The PDE is subject to the initial condition

    u(0, x) = g_0(x)    (2)

for x ∈ Ω, and boundary conditions

    u(t, x) = g_∂Ω(x)    (3)

for x ∈ ∂Ω and t ∈ [0, T]. The main idea of PINNs consists in approximating u(t, x) (and hence f(t, x)) with a neural network, given a small set of N noisy observations u_obs with

    u(t^(i), x^(i)) + ϵ^(i) = u^(i)_obs    (4)

and noise ϵ^(i) ≪ u^(i) for all i ∈ {1, . . . , N}. This allows us to consider the following two important problem settings: if λ is known, the PDE is fully specified, and we aim to find a solution u in a data-driven manner by training a neural network.
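To make the residual f(t, x) in Eq. 1 concrete, the following minimal sketch evaluates it for the 1D heat equation, f(t, x) = ∂_t u − λ ∂²_x u, on a known analytic solution. The function names, the parameter values, and the use of central finite differences are illustrative assumptions; in an actual PINN, u would be a neural network and the derivatives would be obtained by automatic differentiation.

```python
import numpy as np

lam, k = 0.1, 2.0  # illustrative diffusivity and wavenumber

def u(t, x):
    # Analytic solution of the 1D heat equation: u(t, x) = exp(-lam k^2 t) sin(k x)
    return np.exp(-lam * k**2 * t) * np.sin(k * x)

def residual(t, x, h=1e-3):
    # PDE residual f(t, x) = d_t u - lam * d_xx u, via central finite differences
    # (a stand-in for the automatic differentiation a PINN would use)
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2
    return u_t - lam * u_xx

print(abs(residual(0.3, 0.5)))  # near zero: the exact solution satisfies the PDE
```

For a network approximation u_Θ, this residual would generally be nonzero, and driving it toward zero is exactly what the PDE loss enforces.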
The PDE takes the role of a regularizer, where the particular physical laws provide our prior information. A second setting considers the inverse learning of the parameters λ by including them in the optimization process in order to infer physical properties such as the viscosity coefficient of a fluid (Jagtap et al., 2020). Initial work on solving time-independent PDEs with neural networks with such PDE-based penalties was pioneered by Dissanayake & Phan-Thien (1994) and van Milligen et al. (1995), with later adoptions such as Parisi et al. (2003) extending it to non-steady and time-dependent settings.

Loss functions. Typically, PINNs approximate f(t, x) by the network f_Θ(t, x), in which the parameters Θ are adjusted by minimizing the combined loss of (i) reconstructing available observations (L_obs), (ii) softly enforcing the PDE constraints on the domain (L_f), and (iii) fulfilling the boundary (L_b) and initial conditions (L_init), i.e.

    Θ* = arg min_Θ [ w_1 L_obs(X, t, u_obs, Θ) + w_2 L_f(Θ) + w_3 L_b(Θ) + w_4 L_init(Θ) ],    (5)

with loss weights w_i ∈ R_≥0. A common choice for L_obs, L_b, and L_init is the expected L2 loss, approximated via the average L2 loss over the observations and via sampled boundary and initial conditions, respectively. It should be noted that the formulations of the forward and inverse problems are identical in this setting, as observations and initial conditions are implemented in a similar manner.

Enforcing the PDE. Although PINNs are by nature mesh-free, the PDE loss L_f in Eq. 5, used for the soft enforcement of Eq. 1, requires a similar discretization step for approximating an integral over the continuous signal domain:

    L_f(Θ) = (1 / |[0, T] × Ω|) ∫_0^T ∫_Ω ||f_Θ(t, x)||_2^2 dx dt = E_p(t,x)[ ||f_Θ(t, x)||_2^2 ] ≈ (1/n) Σ_{j=1}^n ||f_Θ(t^(j), x^(j))||_2^2,    (6)

with p(t, x) being supported on [0, T] × Ω. The points {(t^(j), x^(j))}_{j=1}^n ⊂ [0, T] × Ω on which the PDE loss is evaluated are commonly referred to as collocation points. This formulation of PINNs for solving Eq. 1 is an Eulerian one, as the function f_Θ is updated by evaluating the PDE with respect to collocation points fixed in space.

Initial approaches for selecting the collocation points in PINNs relied on a fixed grid (Lagaris et al., 1998; Rudd, 2013; Lagaris et al., 2000), followed by work proposing stochastic estimates of the integral via (Quasi-) Monte Carlo methods (Sirignano & Spiliopoulos, 2018; Lu et al., 2021; Chen et al., 2019) or Latin Hypercube sampling (Raissi et al., 2019). However, these approaches to Eulerian PINNs cannot be directly applied if there are no known boundaries or boundary conditions, e.g. for Ω = R^d. Additionally, problems can arise if the constrained region is large compared to the area of interest. Considering, for example, the shock wave (of a compressible gas) in a comparably large space, most collocation points would fall into areas of low density.
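The Monte Carlo approximation of L_f in Eq. 6 with uniform collocation sampling can be sketched as follows. The residual f_theta below is a hypothetical stand-in (a trained PINN would compute the actual PDE residual via automatic differentiation); the domain bounds and sample size are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, omega = 1.0, (0.0, 1.0)  # assumed time horizon and 1D spatial domain

def f_theta(t, x):
    # Hypothetical stand-in for the network's PDE residual f_Theta(t, x);
    # chosen so that E[f^2] = E[x^2] = 1/3 under uniform sampling on [0, 1].
    return x

# Uniformly sampled collocation points on [0, T] x Omega (Eq. 6)
n = 10_000
t_col = rng.uniform(0.0, T, n)
x_col = rng.uniform(omega[0], omega[1], n)

# Monte Carlo estimate: L_f ~ (1/n) * sum_j ||f_Theta(t_j, x_j)||^2
L_f = np.mean(f_theta(t_col, x_col) ** 2)
print(L_f)  # close to 1/3 for this stand-in residual
```

This uniform scheme is exactly what breaks down when Ω is unbounded or when the signal occupies only a small fraction of the sampled region: most of the n points then contribute gradient information from irrelevant, low-density areas.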
We argue that, due to the locality of particle interactions, the regions with higher density are more relevant for regularizing the network. To address these shortcomings of previous methods, we propose a mesh-free and adaptive approach for sampling collocation points, illustrated with the example of compressible fluids. By changing p(t, x) to the distribution over the particle positions in the fluid, we effectively change the loss functional in Eq. 6. We then generalize to other settings, such as thermodynamics, by interpreting a positive scalar quantity of interest with a finite integral as a particle density. Within this work we specifically focus on PDEs that can be derived based on local particle interactions, or that can be shown to be equivalent to such a view, as is for example the case for the heat equation with its connection to particle diffusion. Notably, we do not require the introduction of Lagrangian updates, as classical mesh-free methods do, which would be based on evaluating the PDE with respect to moving particles (see also Section 2).

Main contributions. The main contributions of this paper are as follows:

• We demonstrate that PINNs with uniform sampling strategies (and refinement methods based on uniform proposals) fail in settings with spatially sparse signals as well as in unbounded signal domains; these problems can severely degrade the network's predictive performance.
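The core sampling idea above — treating a nonnegative quantity as an unnormalized probability and drawing collocation points from it with a dynamic Monte Carlo method — can be sketched with a random-walk Metropolis-Hastings sampler. The density rho below is a hypothetical stand-in (a Gaussian bump at a fixed time); in pdPINN it would be the network's own density prediction, and the sampler parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rho(x):
    # Stand-in for a predicted nonnegative quantity (e.g. fluid density),
    # read as an unnormalized probability density over positions.
    return np.exp(-0.5 * x**2)

def metropolis_hastings(n_steps, step=1.0, x0=0.0):
    # Random-walk Metropolis-Hastings: needs only ratios rho(x')/rho(x),
    # so the normalization constant of rho is never required.
    samples, x = [], x0
    for _ in range(n_steps):
        proposal = x + step * rng.normal()          # symmetric proposal
        if rng.uniform() < rho(proposal) / rho(x):  # accept w.p. min(1, ratio)
            x = proposal
        samples.append(x)
    return np.array(samples)

# Collocation points now concentrate where rho is large, with no need
# to specify a bounded sampling region in advance.
chain = metropolis_hastings(30_000)[5_000:]  # drop burn-in
print(chain.mean(), chain.std())
```

Because only density ratios are needed, no boundaries and no normalization of the predicted quantity have to be specified, which is precisely what removes the bounded-region requirement of uniform collocation sampling.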

