FOURIER PINNS: FROM STRONG BOUNDARY CONDITIONS TO ADAPTIVE FOURIER BASES

Abstract

Interest in Physics-Informed Neural Networks (PINNs) is rising as a mesh-free alternative to traditional numerical solvers for partial differential equations (PDEs). While successful, PINNs often struggle to learn high-frequency and multi-scale target solutions, which, according to prior analysis, might arise from competition during optimization between the weakly enforced boundary loss and residual loss terms. By creatively modifying the neural network architecture, some simple boundary conditions (BCs) can be satisfied exactly without jointly optimizing an additional loss term, thus avoiding this competition. Motivated by this analysis, we first study a strong-BC version of PINNs for Dirichlet BCs and observe a consistent improvement over standard PINNs. Through a Fourier analysis, we find that strong-BC PINNs better learn the amplitudes of the high-frequency components of the target solutions. While strong-BC PINNs provide improvement, constructing such architectures is an intricate process that certain BCs and domain geometries make difficult, if not impossible. Guided by our analysis, we propose Fourier PINNs: a simple, general, yet powerful method that augments PINNs with pre-specified, dense Fourier bases. Our proposed architecture likewise better learns high-frequency components but places no restrictions on the particular BCs. We develop an adaptive learning and basis-selection algorithm that alternates between NN basis optimization, estimation of the Fourier and NN basis coefficients, and coefficient truncation. This scheme flexibly identifies the significant frequencies while weakening the nominal ones, thereby better capturing the target solution's power spectrum. We show the advantage of our approach in a set of systematic experiments.

1. Introduction

Physics-informed neural networks (PINNs) (Raissi et al., 2019a) are emergent mesh-free approaches to solving partial differential equations (PDEs). They have proven successful in many scientific and engineering problems, such as bio-engineering (Sahli Costabal et al., 2020; Kissas et al., 2020), fluid mechanics (Raissi et al., 2019b; Sun et al., 2020; Raissi et al., 2020), fractional PDEs (Pang et al., 2019b; 2020), and material design (Fang & Zhan, 2019; Liu & Wang, 2019). The PINN framework uses neural networks (NNs) to estimate PDE solutions, in light of the universal approximation ability of NNs. Specifically, consider a PDE of the following general form,

$$\mathcal{F}[u](\mathbf{x}) = f(\mathbf{x}) \;\; (\mathbf{x} \in \Omega), \qquad u(\mathbf{x}) = g(\mathbf{x}) \;\; (\mathbf{x} \in \partial\Omega),$$

where $\mathcal{F}$ is the differential operator of the PDE, $\Omega$ is the domain, and $\partial\Omega$ is the boundary of the domain. To solve the PDE, the PINN uses a deep neural network $u_\theta(\mathbf{x})$ to represent the solution $u$, samples $N$ collocation points $\{\mathbf{x}_c^i\}_{i=1}^N$ from $\Omega$ and $M$ points $\{\mathbf{x}_b^i\}_{i=1}^M$ from $\partial\Omega$, and minimizes the loss

$$\theta^* = \operatorname*{argmin}_\theta \; L_b(\theta) + L_r(\theta),$$

where $L_b(\theta) = \frac{1}{M}\sum_{j=1}^M \big(u_\theta(\mathbf{x}_b^j) - g(\mathbf{x}_b^j)\big)^2$ is the boundary loss that fits the boundary condition, and $L_r(\theta) = \frac{1}{N}\sum_{j=1}^N \big(\mathcal{F}[u_\theta](\mathbf{x}_c^j) - f(\mathbf{x}_c^j)\big)^2$ is the residual loss that fits the equation. Despite their success, the training of PINNs is often unstable, and the performance can be poor, especially when the solution includes high-frequency and multi-scale components.
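The composite loss above can be made concrete with a minimal sketch. This toy example (all names and the random-feature "network" are illustrative assumptions, not the authors' implementation) evaluates $L_b + L_r$ for a 1D Poisson problem $u''(x) = f(x)$ on $[0, 1]$ with zero Dirichlet BCs, approximating $u''$ by central finite differences where a real PINN would use automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
f = lambda x: -(np.pi ** 2) * np.sin(np.pi * x)
g = lambda x: np.zeros_like(x)  # Dirichlet boundary data

# Stand-in for the network u_theta: fixed random cosine features with
# linear coefficients `w` playing the role of the trainable parameters.
W = rng.normal(size=16)            # random frequencies
b = rng.uniform(0, 2 * np.pi, 16)  # random phases
w = rng.normal(size=16) * 0.1      # "trainable" coefficients

def pinn_loss(w_flat, eps=1e-4):
    """Boundary loss L_b plus residual loss L_r; u'' is approximated by
    central finite differences instead of autodiff for self-containment."""
    def u(x):
        return np.cos(np.outer(x, W) + b) @ w_flat
    x_b = np.array([0.0, 1.0])                 # M = 2 boundary points
    L_b = np.mean((u(x_b) - g(x_b)) ** 2)
    x_c = rng.uniform(0.0, 1.0, 50)            # N = 50 collocation points
    u_xx = (u(x_c + eps) - 2 * u(x_c) + u(x_c - eps)) / eps ** 2
    L_r = np.mean((u_xx - f(x_c)) ** 2)
    return L_b + L_r

print(pinn_loss(w))  # a finite, nonnegative scalar
```

Minimizing this quantity over the network parameters (here, `w`) is exactly the joint optimization of $L_b$ and $L_r$ described above; the competition between the two terms during that optimization is what the paper's analysis targets.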
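The abstract's "strong BC" idea, i.e. satisfying the boundary condition exactly by construction so that $L_b$ can be dropped, can also be sketched briefly. The ansatz below is one standard construction (the helper names are illustrative, not the paper's notation): write $u_\theta(x) = \tilde{g}(x) + d(x)\,N(x)$, where $\tilde{g}$ matches the boundary data and $d$ vanishes on $\partial\Omega$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hard Dirichlet BC enforcement on Omega = [0, 1] with u(0) = u(1) = 0.
def g_tilde(x):            # extension of the boundary data (here zero)
    return np.zeros_like(x)

def d(x):                  # distance-like factor, zero at x = 0 and x = 1
    return x * (1.0 - x)

# Stand-in for the unconstrained neural network N(x).
W = rng.normal(size=8)
b = rng.uniform(0, 2 * np.pi, 8)
w = rng.normal(size=8)

def N(x):
    return np.tanh(np.outer(x, W) + b) @ w

def u_theta(x):
    # u_theta satisfies the BC exactly, for any network parameters.
    return g_tilde(x) + d(x) * N(x)

print(u_theta(np.array([0.0, 1.0])))  # -> [0. 0.]
```

Because the BC holds identically, only the residual loss $L_r$ remains to be optimized; the difficulty the paper notes is that designing $d$ and $\tilde{g}$ for complicated BCs or domain geometries can be intricate or impossible, which motivates the Fourier PINN alternative.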

