NEURAL PARTIAL DIFFERENTIAL EQUATIONS WITH FUNCTIONAL CONVOLUTION

Abstract

We present a lightweight neural PDE representation that discovers the hidden structure and predicts the solutions of different nonlinear PDEs. Our key idea is to leverage the prior of "translational similarity" of numerical PDE differential operators to drastically reduce the scale of the learning model and the training data. We implement three central network components: a neural functional convolution operator, a Picard forward iterative procedure, and an adjoint backward gradient calculator. Our paradigm fully leverages the multifaceted priors that stem from the sparse and smooth nature of the physical PDE solution manifold, together with mature numerical techniques such as adjoint solvers, linearization, and iterative procedures, to accelerate the computation. We demonstrate the efficacy of our method by robustly discovering the model and accurately predicting the solutions of various types of PDEs with small-scale networks and training sets. We highlight that every PDE example we show is trained with at most 8 data samples and at most 325 network parameters.
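To make the "translational similarity" prior concrete: on a uniform grid, a finite-difference differential operator applies the same local stencil at every point, which is exactly a convolution. The following NumPy sketch (our own illustration, not the paper's code) shows the 1D Laplacian realized as a translation-invariant convolution:

```python
import numpy as np

def laplacian_conv(u, h):
    # The second-order central-difference Laplacian applies the fixed
    # stencil [1, -2, 1] / h^2 at every interior grid point -- i.e. the
    # same local operation translated across the grid, which is a
    # convolution. (The stencil is symmetric, so convolution equals
    # correlation here.)
    stencil = np.array([1.0, -2.0, 1.0]) / h**2
    # 'valid' mode: interior points only; boundaries are handled separately.
    return np.convolve(u, stencil, mode="valid")

h = 0.01
x = np.arange(0.0, 1.0 + h, h)
u = np.sin(np.pi * x)          # test function with known second derivative
lap = laplacian_conv(u, h)     # approximates u'' = -pi^2 sin(pi x) on the interior
```

This translation invariance is what lets a small set of shared kernel weights represent a differential operator over the entire domain, rather than learning a separate map per grid point.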

1. INTRODUCTION

Problem definition We aim to devise a learning paradigm to solve the inverse PDE identification problem. By observing a small data set in the solution space of a PDE whose equations are unknown, we want to generate an effective neural representation that can precisely reconstruct the hidden structure of the target PDE system. This neural representation further facilitates predicting the PDE solution under different boundary conditions. The right inset figure shows a typical example of our target problem: by observing a small part (4 samples in the figure) of the solution space of a nonlinear PDE system F(x) = b, without knowing its analytical equations, our neural representation depicts the hidden differential operators underpinning F (e.g., it represents the unknown differential operator ∇·(1 + x²)∇ by training the model on solutions of ∇·(1 + x²)∇x = b).

Challenges to solve The nonlinearity and the curse of dimensionality of the target PDE's solution manifold are the two main challenges in designing a high-performance neural discretization. An effective neural representation of a PDE system plays an essential role in addressing these challenges. In retrospect, the design of neural PDE representations has evolved from raw, unstructured networks (e.g., direct end-to-end data fitting) to various structured ones with proper mathematical priors embedded. Examples include residual-based loss functions (e.g., physics-informed networks; Raissi et al., 2020; Lu et al., 2019; Raissi et al., 2019), learnable convolution kernels (e.g., PDE-Nets; Long et al., 2018a;b; 2019), and hybrids of numerical stencils and MLP layers (e.g., Amos & Kolter, 2017; Pakravan et al., 2020; Geng et al., 2020; Stevens & Colonius, 2020).
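As an illustration of the forward problem behind such training samples, the following NumPy sketch (a hypothetical example, not the paper's implementation; the grid size, boundary conditions, and solver details are our assumptions) solves the 1D analogue of ∇·(1 + x²)∇x = b, i.e. d/dx((1 + u²) du/dx) = b with u(0) = u(1) = 0, by Picard iteration: freeze the nonlinear coefficient a = 1 + u_k², solve the resulting linear tridiagonal system for u_{k+1}, and repeat until convergence.

```python
import numpy as np

def solve_picard(b, n=64, iters=50, tol=1e-10):
    # Solve d/dx((1 + u^2) du/dx) = b on [0, 1], u(0) = u(1) = 0,
    # on n interior grid points, via Picard (fixed-point) iteration.
    h = 1.0 / (n + 1)
    u = np.zeros(n)                                  # initial guess
    for _ in range(iters):
        a = 1.0 + u**2                               # frozen coefficient
        # Coefficient at cell faces a_{i+1/2}; u = 0 on the boundary, so a = 1 there.
        a_ext = np.concatenate(([1.0], a, [1.0]))
        a_face = 0.5 * (a_ext[:-1] + a_ext[1:])      # length n + 1
        # Assemble the tridiagonal operator for d/dx(a du/dx).
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = -(a_face[i] + a_face[i + 1]) / h**2
            if i > 0:
                A[i, i - 1] = a_face[i] / h**2
            if i < n - 1:
                A[i, i + 1] = a_face[i + 1] / h**2
        u_new = np.linalg.solve(A, b)                # linear solve per iteration
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# One solution sample for the right-hand side b = -1 everywhere:
# a positive, symmetric bump, close to the linear-case solution x(1 - x)/2.
u = solve_picard(np.full(64, -1.0))
```

Running such a solver for a handful of right-hand sides b yields the small set of (u, b) pairs (e.g., the 4 samples in the inset figure) from which the hidden operator is to be identified.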
Following this line of research, we aim to devise a lightweight neural PDE representation that fuses the essential structure of the mathematical equations, the computational efficiency of numerical solvers, and the expressive power of neural networks. In particular, we want to aggressively reduce the scale of both the model parameters and the training data to an extreme, while extending the scope of the targeted PDE systems to a broad range encompassing equations that are linear and nonlinear, steady-state and dynamic.

