BCXN-PINN FOR COMPLEX GEOMETRY: SOLVING PDES WITH BOUNDARY CONNECTIVITY LOSS

Abstract

We present a novel loss formulation for efficient learning of complex dynamics from governing physics, typically described by partial differential equations (PDEs), using physics-informed neural networks (PINNs). In our experiments, existing versions of PINNs learn poorly in many problems, especially for complex geometries, because it becomes increasingly difficult to establish an appropriate sampling strategy in the near-boundary region. Overly dense sampling can impede training convergence when the local gradient behaviour is too complex to be adequately modelled by the PINN; conversely, if the samples are too sparse, the PINN may over-fit the near-boundary region, leading to an incorrect solution. To prevent such issues, we propose a new Boundary Connectivity (BCXN) loss function which provides a local structure approximation at the boundary. Our BCXN-loss can implicitly or explicitly impose such approximations during training, thus facilitating fast physics-informed learning across the entire problem domain with an order of magnitude fewer training samples. The method achieves errors a few orders of magnitude smaller than existing methods under the standard L2-norm metric, while using dramatically fewer training samples and iterations. Our proposed BCXN-PINN method imposes no differentiability requirement on the network, and we demonstrate its benefits and ease of implementation on both the multi-layer perceptron and convolutional neural network variants commonly used in the current physics-informed neural network literature.

1. INTRODUCTION

Physics-informed neural networks (PINNs) have emerged as a promising method for learning the solutions of dynamical systems from the governing physics (Raissi et al., 2019). PINNs have recently been studied for a wide range of physical phenomena and applications across science and engineering domains, such as electromagnetics, fluid dynamics, and heat transfer (Karniadakis et al., 2021; Cuomo et al., 2022). The distinctive feature of PINNs is the use of the governing physical laws, typically in the form of partial differential equations (PDEs), as the learning objective. This physics-informed learning constrains the PINN from violating the underlying physics at all training points sampled from the problem domain. Existing PINNs evaluate the PDE constraints in their training loss by either automatic differentiation (AD) or numerical differentiation (ND)-type methods (Wandel et al., 2020). While both methods have their pros and cons, an ND-type PDE loss can be flexibly implemented across many different neural network (NN) architectures, including both multi-layer perceptrons (MLPs) and convolutional neural networks (CNNs), because it does not require the NN to retain differentiability, unlike AD. Recent studies (Gao et al., 2021; Fang, 2021; Chiu et al., 2022) have also suggested that ND-losses can more robustly and efficiently produce accurate solutions with fewer training samples, whereas conventional AD-losses are prone to failure during training. This is because ND-type methods approximate high-order derivatives using PINN outputs from neighbouring samples; they can therefore effectively connect sparse samples into piecewise regions via these local approximations, facilitating fast physics-informed learning across the entire domain with less dense sampling. When dealing with irregular geometries, however, it becomes increasingly difficult for existing PINNs to perfectly connect training samples in the domain's interior to the boundary.
Failing to do so can cause undesirable training failure as the PINN starts to over-fit in the near-boundary region. Since
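To make the ND-type loss concrete, the following is a minimal NumPy sketch (not the paper's implementation) of an ND-type PDE residual for the 1D Poisson equation u''(x) = f(x). A known analytic function stands in for the PINN output so the residual can be checked directly; the key point is that the second derivative at each interior sample is built from the two neighbouring samples, which is what couples sparse collocation points into piecewise-connected regions.

```python
import numpy as np

# Stand-in for the PINN prediction u(x); a known function is used here
# purely so the ND residual can be verified against the exact source term.
def u(x):
    return np.sin(np.pi * x)

def nd_pde_residual(u_vals, h, f_vals):
    """ND-type residual for the 1D Poisson equation u''(x) = f(x).

    The second derivative at each interior sample is approximated by a
    central finite difference over the two neighbouring samples, so each
    residual term ties three adjacent collocation points together.
    """
    u_xx = (u_vals[2:] - 2.0 * u_vals[1:-1] + u_vals[:-2]) / h**2
    return u_xx - f_vals[1:-1]

# Sparse, uniformly spaced collocation points.
x = np.linspace(0.0, 1.0, 21)
h = x[1] - x[0]
f = -np.pi**2 * np.sin(np.pi * x)   # exact source term for u = sin(pi x)

res = nd_pde_residual(u(x), h, f)
loss = np.mean(res**2)              # ND-type PDE loss term
print(loss)  # small but nonzero: the O(h^2) truncation error of the stencil
```

An AD-type loss would instead differentiate the network itself (e.g. via autograd) at each point independently, which is why it requires a differentiable network and does not, by itself, connect neighbouring samples.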

