BCXN-PINN FOR COMPLEX GEOMETRY: SOLVING PDES WITH BOUNDARY CONNECTIVITY LOSS

Abstract

We present a novel loss formulation for efficient learning of complex dynamics from governing physics, typically described by partial differential equations (PDEs), using physics-informed neural networks (PINNs). In our experiments, existing versions of PINNs are seen to learn poorly in many problems, especially for complex geometries, as it becomes increasingly difficult to establish an appropriate sampling strategy in the near-boundary region. Overly dense sampling can adversely impede training convergence if the local gradient behaviors are too complex to be adequately modelled by PINNs. On the other hand, if the samples are too sparse, PINNs may over-fit the near-boundary region, leading to incorrect solutions. To prevent such issues, we propose a new Boundary Connectivity (BCXN) loss function which provides a local structure approximation at the boundary. Our BCXN-loss can implicitly or explicitly impose such approximations during training, thus facilitating fast physics-informed learning across entire problem domains with an order of magnitude fewer training samples. Our method achieves errors a few orders of magnitude smaller than those of existing methods in terms of the standard L2-norm metric, while using dramatically fewer training samples and iterations. The proposed BCXN-PINN method places no requirement on the differentiability of the networks, and we demonstrate its benefits and ease of implementation on both the multi-layer perceptron and convolutional neural network versions commonly used in the current physics-informed neural network literature.

1. INTRODUCTION

Physics-informed neural networks (PINNs) have emerged as a promising method for learning the solutions of dynamical systems from the governing physics (Raissi et al., 2019). PINNs have recently been studied for a wide range of physical phenomena and applications across science and engineering domains, e.g., electromagnetics, fluid dynamics, and heat transfer (Karniadakis et al., 2021; Cuomo et al., 2022). The distinctive feature of PINNs is the use of governing physical laws, typically in the form of partial differential equations (PDEs), as the learning objective. This physics-informed learning constrains the PINN from violating the underlying physics at all training points sampled from the problem domain. Existing PINNs evaluate the PDE constraints in their training loss by either automatic differentiation (AD) or numerical differentiation (ND)-type methods (Wandel et al., 2020). While both methods have their pros and cons, an ND-type PDE loss can be flexibly implemented across many different neural network (NN) architectures, including both multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs), because it does not require the NN to retain differentiability, unlike AD. Recent studies (Gao et al., 2021; Fang, 2021; Chiu et al., 2022) have also suggested that the ND-loss can more robustly and efficiently produce accurate solutions with fewer training samples, whereas the conventional AD-loss is prone to failure during training. This is because ND-type methods approximate high-order derivatives using PINN outputs at neighbouring samples; hence, they can effectively connect sparse samples into piecewise regions via these local approximations, thereby facilitating fast physics-informed learning across the entire domain with less dense sampling. When dealing with irregular geometries, however, it becomes increasingly difficult for existing PINNs to perfectly connect training samples in the domain's interior to the boundary.
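The AD- versus ND-type distinction above can be made concrete with a minimal sketch (not tied to any specific PINN library): for a 1D model problem u''(x) = f(x), the AD-loss differentiates the network twice with nested automatic differentiation, while the ND-loss applies a finite-difference stencil that couples the network's outputs at neighbouring samples. The tiny MLP, its parameter shapes, and the function names here are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision keeps the finite-difference comparison clean

def mlp(params, x):
    """A tiny illustrative MLP surrogate u(x): scalar in, scalar out."""
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 * x + b1)   # W1, b1 are (hidden,) vectors since the input is a scalar
    return jnp.dot(W2, h) + b2

def ad_residual(params, x, f):
    """AD-type PDE residual for u''(x) = f(x): nested automatic differentiation.
    Requires the network to be differentiable w.r.t. its input."""
    u = lambda xx: mlp(params, xx)
    u_xx = jax.grad(jax.grad(u))
    return u_xx(x) - f(x)

def nd_residual(params, x, f, h=1e-3):
    """ND-type PDE residual: a central finite-difference stencil that connects
    the PINN outputs at the neighbouring samples x - h, x, x + h."""
    u = lambda xx: mlp(params, xx)
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    return u_xx - f(x)
```

Because the stencil in `nd_residual` mixes outputs at several sample locations, a gradient step on this loss updates the network consistently across the whole local neighbourhood, which is the sample-connecting behaviour the ND-loss is credited with above.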
Failing to do so can cause undesirable training failures as the PINN starts to over-fit at the near-boundary region. Since many PDEs of practical interest are boundary-value problems, it is desirable to have the PINNs model the correct boundary behaviors. While adding dense samples to better refine the piecewise local regions near the boundary may improve accuracy, the extent to which sampling needs to be increased is empirical, and a denser sampling strategy may adversely impede training convergence. Hence, we propose a loss function formulation in this work that provides an approximation to the local gradient behavior at the boundary, thereby restoring connectivity between the domain boundary and near-boundary interior samples. This new boundary connectivity (BCXN) loss function is the key to a novel class of BCXN-PINN methods which can more efficiently learn the solution to PDEs with fewer training samples, regardless of domain geometry; see the example in Fig. 1. In addition, this method can be jointly implemented with other PINN advances such as loss balancing, domain decomposition, adaptive sampling, and other improved optimization methods (Zeng et al., 2022).

In the rest of this work, we present two versions of this BCXN-loss: i) a soft forcing approach, which imposes a linear approximation constraint via an additional loss term; and ii) a direct forcing approach, which strongly enforces a linear constraint during the evaluation of the PDE loss at near-boundary samples. With the latter approach, there is no longer a need to explicitly evaluate the boundary samples, as the exact BCs have been implicitly "infused" into the near-boundary samples. Moreover, the direct forcing BCXN-loss can be beneficial to CNN-type architectures, which utilize a structured grid and lack the ability to model exact domain boundaries for irregular geometries. We present comprehensive experiments to demonstrate i) the flexible implementation of the BCXN-PINN method for both MLP and CNN architectures, and ii) the effectiveness of BCXN-PINN for learning multiple complex fluid dynamical systems, spanning forward, inverse, and meta-model problems in two dimensions (2D) and three dimensions (3D). Compared to conventional PINNs with the AD- and ND-loss, our BCXN-PINNs with the BCXN-loss are shown to be capable of tackling challenging PDE problems while using fewer training samples, hence expanding the exciting potential of PINNs for learning the complex dynamical evolutions encountered in the real world.

Figure 1: PINNs learning the solution of the 2D N-S equations in a complex geometry (wavy-channel flow problem, Re = 100). Our BCXN-PINN method can learn an accurate solution with faster speed (50% fewer training iterations) and fewer (50% fewer) training samples.

2. RELATED WORK

Efficient sampling in PINNs. The theoretical limits of physics-informed loss learning in relation to training samples have been provided by prior studies (Lu et al., 2021c; Mishra & Molinaro, 2022). With the goal of improving PINN training speed for practical applications (Markidis, 2021), several studies have focused on efficient sampling strategies such as importance sampling, adaptive sampling, and sequential sampling to reduce the number of training samples required during PINN training (Anitescu et al., 2019; McClenny & Braga-Neto, 2020; Wight & Zhao, 2020; Nabian et al., 2021; Lu et al., 2021a; Lye et al., 2021; Daw et al., 2022; Mattey & Ghosh, 2022; Wu et al., 2023). Domain decomposition and parallelization strategies have also been explored to speed up training (Jagtap et al., 2020; Jagtap & Karniadakis, 2021; Shukla et al., 2021; Dong & Li, 2021; Li et al., 2019; Kharazmi et al., 2021). Our method differs from these works in that we make physics-informed learning more robust in the sparse-sample regime via the newly-proposed BCXN-loss.
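To make the direct forcing idea from the introduction concrete, the hedged 1D sketch below (an illustration under our own naming, not the authors' exact formulation) shows how a finite-difference stencil that pokes outside the domain can have its outside value replaced by a linear extrapolation through the exact boundary condition, so the BC is implicitly infused into the ND-type PDE residual. The helper names (`fictitious_value`, `nd_residual_direct_forcing`) and the 1D Poisson setting u''(x) = f are illustrative assumptions.

```python
import numpy as np

def fictitious_value(u_in, x_in, xb, ub, x_out):
    """Linearly extrapolate a value for a stencil point x_out lying outside
    the domain, using the interior sample (x_in, u_in) and the exact
    boundary condition u(xb) = ub (a local linear approximation)."""
    slope = (u_in - ub) / (x_in - xb)
    return ub + slope * (x_out - xb)

def nd_residual_direct_forcing(u, x, h, xb, ub, f):
    """Central-difference residual of u''(x) = f at a near-boundary sample x.
    The left neighbour x - h falls outside the boundary at xb (x - h < xb),
    so its value is replaced by the linearly extrapolated fictitious value;
    no separate boundary loss term is then needed at xb."""
    u_right = u(x + h)
    u_center = u(x)
    u_left = fictitious_value(u_center, x, xb, ub, x - h)
    return (u_right - 2.0 * u_center + u_left) / h**2 - f
```

By construction, the extrapolated stencil value satisfies the boundary condition exactly for any locally linear solution, which is the sense in which the direct forcing approach "infuses" the exact BC into the near-boundary samples.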

