MESH-FREE EULERIAN PHYSICS-INFORMED NEURAL NETWORKS

Abstract

Physics-informed Neural Networks (PINNs) have recently emerged as a principled way to include prior physical knowledge in the form of partial differential equations (PDEs) into neural networks. Although PINNs are generally viewed as mesh-free, current approaches still rely on collocation points within a bounded region, even in settings with spatially sparse signals. Furthermore, if the boundaries are not known, the selection of such a region is difficult and often results in a large proportion of collocation points being placed in areas of low relevance. To resolve this severe drawback of current methods, we present a mesh-free and adaptive approach termed particle-density PINN (pdPINN), which is inspired by the microscopic viewpoint of fluid dynamics. The method is based on the Eulerian formulation and, unlike classical mesh-free methods, does not require the introduction of Lagrangian updates. We propose to sample directly from the distribution over the particle positions, eliminating the need to introduce boundaries while adaptively focusing on the most relevant regions. This is achieved by interpreting a nonnegative physical quantity (such as the density or temperature) as an unnormalized probability distribution from which we sample with dynamic Monte Carlo methods. The proposed method leads to higher sample efficiency and improved performance of PINNs. These advantages are demonstrated in various experiments based on the continuity equation, Fokker-Planck equations, and the heat equation.
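The core sampling idea can be sketched with a random-walk Metropolis chain, one simple dynamic Monte Carlo method: because the acceptance ratio only involves density ratios, the nonnegative physical quantity need not be normalized. The density `rho` below is a hypothetical stand-in chosen only for illustration; in pdPINN, the network's own predicted density would play this role.

```python
import numpy as np

def rho(z):
    """Hypothetical unnormalized density over (t, x): mass concentrated near x = t."""
    t, x = z
    return np.exp(-0.5 * (x - t) ** 2 / 0.1)

def metropolis(n_samples, step=0.2, seed=0):
    """Sample collocation points proportional to rho via random-walk Metropolis."""
    rng = np.random.default_rng(seed)
    z = np.array([0.5, 0.5])
    samples = []
    for _ in range(n_samples):
        prop = z + step * rng.standard_normal(2)
        # Accept with probability min(1, rho(prop) / rho(z)); the unknown
        # normalization constant cancels in the ratio.
        if rng.random() < rho(prop) / max(rho(z), 1e-300):
            z = prop
        samples.append(z.copy())
    return np.array(samples)

pts = metropolis(2000)
```

The resulting points concentrate where the modeled quantity is large, rather than filling a fixed bounding box uniformly.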

1. INTRODUCTION

Many phenomena in physics are commonly described by partial differential equations (PDEs), which give rise to complex dynamical systems but often lack tractable analytical solutions. Important examples can be found, for instance, in fluid dynamics, with typical applications in the design of gas and steam turbines (Oosthuizen & Carscallen, 2013), as well as in modeling the collective motion of self-driven particles (Marchetti et al., 2013) such as flocks of birds or bacteria colonies (Szabó et al., 2006; Nussbaumer et al., 2021). Despite the relevant progress in establishing numerical PDE solvers, such as finite element and finite volume methods, the seamless incorporation of data remains an open problem (Freitag, 2020). To fill this gap, Physics-informed Neural Networks (PINNs) have emerged as an attractive alternative to classical methods for data-based forward and inverse solving of PDEs.

The general idea of PINNs is to use the expressive power of modern neural architectures for solving PDEs in a data-driven way by minimizing a PDE-based loss, cf. Raissi et al. (2019). Consider parameterized PDEs of the general form

f(t, x | λ) := ∂_t u(t, x) + P(u | λ) = 0,    (1)

where P is a non-linear operator parameterized by λ, and ∂_t is the partial derivative w.r.t. time t ∈ [0, T]. The position x ∈ Ω is defined on a spatial domain Ω ⊆ R^d. The PDE is subject to the initial condition g_0,

u(0, x) = g_0(x)    (2)

for x ∈ Ω, and boundary conditions g_∂Ω,

u(t, x) = g_∂Ω(x)    (3)

for x ∈ ∂Ω and t ∈ [0, T]. The main idea of PINNs consists in approximating u(t, x) (and hence f(t, x)) with a neural network, given a small set of N noisy observations u_obs,

u(t^(i), x^(i)) + ε^(i) = u_obs^(i)
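The PDE residual in Eq. (1) can be sketched for the one-dimensional heat equation, ∂_t u − ∂_x² u = 0 (one of the PDEs used in the paper's experiments). The closed-form `u` below is a hypothetical stand-in for the network u_θ, and central finite differences replace the automatic differentiation a real PINN implementation would use; this is a minimal illustration, not the paper's implementation.

```python
import numpy as np

def u(t, x):
    """Hypothetical closed-form surrogate standing in for the PINN u_theta."""
    return np.tanh(0.7 * t + 0.3 * x)

def pde_residual(t, x, h=1e-3):
    """Residual f(t, x) = ∂_t u + P(u) with P(u) = -∂_x² u (1-D heat equation).

    Central finite differences approximate the derivatives that a PINN
    would obtain via automatic differentiation.
    """
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    u_xx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h ** 2
    return u_t - u_xx

# Training a PINN minimizes the mean squared residual over collocation
# points, plus a data-fitting term on the noisy observations u_obs.
rng = np.random.default_rng(0)
ts, xs = rng.random(128), rng.random(128)
loss_pde = np.mean(pde_residual(ts, xs) ** 2)
```

Where the collocation points (ts, xs) come from is precisely the question pdPINN addresses: instead of uniform draws from a fixed box, it samples them from the modeled density itself.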

