NON-EQUISPACED FOURIER NEURAL SOLVERS FOR PDES

Abstract

Solving partial differential equations is difficult. Recently proposed neural resolution-invariant models, despite their effectiveness and efficiency, usually require data sampled at equispaced spatial points. However, sampling in the spatial domain is sometimes inevitably non-equispaced in real-world systems, limiting their applicability. In this paper, we propose a Non-equispaced Fourier PDE Solver (NFS) whose components are an adaptive interpolation onto resampled equispaced points and a variant of Fourier Neural Operators. Experimental results on complex PDEs demonstrate its advantages in accuracy and efficiency. Compared with spatially-equispaced benchmark methods, it achieves superior performance with a 42.85% improvement in MAE, and is able to handle non-equispaced data with only a tiny loss of accuracy. Besides, to the best of our knowledge, NFS is the first ML-based method with mesh-invariant inference ability to successfully model turbulent flows in non-equispaced scenarios, with only a minor deviation of the error on unseen spatial points.

1. INTRODUCTION

Solving partial differential equations (PDEs) holds the key to revealing the underlying mechanisms of physical systems and forecasting their future evolution. However, classical numerical PDE solvers require fine discretization in the spatial domain to capture the relevant patterns and ensure convergence, and they also suffer from computational inefficiency. Recently, data-driven neural PDE solvers have revolutionized this field by providing fast and accurate solutions for PDEs. Unlike approaches designed to model one specific instance of a PDE (E & Yu, 2017; Bar & Sochen, 2019; Smith et al., 2020; Pan & Duraisamy, 2020; Raissi et al., 2020), neural operators (Guo et al., 2016; Sirignano & Spiliopoulos, 2018; Bhatnagar et al., 2019; Khoo et al., 2020; Li et al., 2020b;c; Bhattacharya et al., 2021; Brandstetter et al., 2022; Lin et al., 2022) directly learn the mapping between infinite-dimensional spaces of functions. They remedy the mesh-dependent nature of finite-dimensional operators by producing a single set of network parameters that can be used with different discretizations. However, two problems remain: discretization-invariant modeling of non-equispaced data, and computational inefficiency compared with convolutional neural networks in the finite-dimensional setting. To alleviate the first problem, MPPDE (Brandstetter et al., 2022) borrows basic modules from MPNN (Gilmer et al., 2017) to model the dynamics of spatially non-equispaced data, but it further increases the time complexity due to the pushforward trick and suffers from unsatisfactory accuracy on complex systems (see Fig. 2(a)). FNO (Li et al., 2020c) has succeeded in tackling the second problem of inefficiency and inaccuracy, but its spatial points must be equispaced because it relies on the fast Fourier transform (FFT). To sum up, a neural PDE solver should possess two properties: (1) discretization-invariance and (2) equispace-unnecessity.
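The FFT's equispaced requirement, and the interpolate-then-transform idea at the heart of NFS, can be illustrated with a minimal NumPy sketch. Plain linear interpolation here stands in for the adaptive, learned interpolation described later; the signal and all variable names are illustrative:

```python
import numpy as np

# Non-equispaced sample locations on [0, 2*pi) with observed values of sin(3x).
rng = np.random.default_rng(0)
x_irregular = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=64))
a_irregular = np.sin(3.0 * x_irregular)

# np.fft implicitly assumes equispaced samples, so we first resample onto a
# uniform grid (simple periodic linear interpolation here; NFS instead learns
# an adaptive interpolation operator).
n = 64
x_uniform = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
a_uniform = np.interp(x_uniform, x_irregular, a_irregular, period=2.0 * np.pi)

# On the uniform grid the FFT is valid: the energy concentrates at wavenumber 3.
spectrum = np.abs(np.fft.rfft(a_uniform))
k_dominant = int(np.argmax(spectrum))
```

Applying the FFT directly to `a_irregular` would instead treat the samples as if they were uniformly spaced and smear the spectrum across wavenumbers.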
Property (1) is shared by infinite-dimensional neural operators, whose learned patterns generalize to unseen meshes. By contrast, classical vision models and graph spatio-temporal models are not discretization-invariant. Property (2) means that the model can handle irregularly sampled spatial points. For example, graph spatio-temporal models do not require the data to be equispaced, whereas vision models are equispace-necessary and limited to handling images as 2-d regular grids. Recently proposed methods can thus be classified into four types according to the two properties, as shown in Fig. 1. As discussed, although the equispace-necessary methods enjoy fast parallel computation and low prediction error, they cannot handle spatially non-equispaced data. For these reasons, this paper aims to design a mesh-invariant model (defined in Fig. 1) called the Non-equispaced Fourier neural Solver (NFS), with comparably low computational cost and high accuracy, by borrowing the powerful expressivity of FNO and vision models to efficiently solve complex PDE systems. Our paper, including its leading contributions, is organized as follows:
• In Sec. 2, we give some preliminaries on neural operators as related work, with a brief introduction to Vision Mixers, to build a bridge between the Fourier Neural Operator and Vision Mixers. This illustrates our motivation: to establish a mesh-invariant neural operator by harnessing the network structure of Vision Mixers.
• In Sec. 3, we propose the Non-equispaced Fourier Solver (NFS), whose components are adaptive interpolation operators and a variant of Fourier Neural Operators. Approximation theorems that guarantee the expressiveness of the proposed interpolation operators are developed, and a further discussion gives insights into the relation between NFS, patchwise embedding, and multipole graph models.
• In Sec. 4, extensive experiments on different types of PDEs demonstrate the superiority of our method. Detailed ablation studies show that both the proposed interpolation kernel and the architecture of Vision Mixers contribute to the improvements in performance.
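Since the FNO variant is a core component throughout, a minimal single-channel sketch of an FNO-style spectral convolution may help fix ideas: transform to frequency space, weight the lowest few modes, truncate the rest, and transform back. The identity weights below are a toy stand-in for FNO's learned mode-wise parameters, not the paper's implementation:

```python
import numpy as np

def spectral_conv_1d(v, weights, n_modes):
    """FNO-style spectral convolution (real-valued, single channel):
    keep the lowest n_modes Fourier modes, multiply them by `weights`
    (a stand-in for trained parameters), and discard the rest."""
    v_hat = np.fft.rfft(v)
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = v_hat[:n_modes] * weights  # truncate high frequencies
    return np.fft.irfft(out_hat, n=v.shape[0])

# With identity weights the layer acts as a sharp low-pass filter,
# which we can check on a two-frequency signal.
n, n_modes = 128, 4
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
v = np.sin(2.0 * x) + np.sin(20.0 * x)  # low + high frequency
out = spectral_conv_1d(v, np.ones(n_modes, dtype=complex), n_modes)
```

The k = 20 component lies above the truncation threshold and is removed, leaving only sin(2x). In the actual operator, the per-mode weights are complex-valued learned tensors acting across channels.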

2. BACKGROUND AND RELATED WORK

Figure 1: Four types of methods, with or without the two limitations discussed above.

PROBLEM STATEMENT

Let D ⊂ R^d be the bounded, open spatial domain on which an n_s-point discretization X = {x_i : 1 ≤ i ≤ n_s} ⊂ D is sampled. The observations of the input function a ∈ A(D; R^{d_a}) and the output function u ∈ U(D; R^{d_u}) on the n_s points are denoted by {a(x_i), u(x_i)}_{i=1}^{n_s}, where A(D; R^{d_a}) and U(D; R^{d_u}) are separable Banach spaces of functions taking values in R^{d_a} and R^{d_u}, respectively. Suppose x ∼ µ is sampled i.i.d. from the probability measure µ supported on D. An infinite-dimensional neural operator G_θ : A(D; R^{d_a}) → U(D; R^{d_u}), parameterized by θ ∈ Θ, aims to build an approximation G_θ(a) ≈ u. A cost functional C : U(D; R^{d_u}) × U(D; R^{d_u}) → R is defined to optimize the parameter θ of the operator via the objective

min_{θ∈Θ} E_{x∼µ}[C(G_θ(a), u)(x)] ≈ min_{θ∈Θ} (1/n_s) ∑_{i=1}^{n_s} C(G_θ(a), u)(x_i).

To establish a mesh-invariant operator, X can be non-equispaced, and the learned G_θ should transfer to an arbitrary discretization X′ ⊂ D, whose points x′ ∈ X′ are not necessarily contained in X. Because we focus on spatially non-equispaced points, when the PDE system is time-dependent we assume that the timestamps {t_j} are uniformly sampled; that is, we do not address temporally irregular sampling or continuous-time problems (Rubanova et al., 2019; Chen et al., 2019; Yıldız et al., 2019; Iakovlev et al., 2020).
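The empirical form of this objective is simply a Monte Carlo average of the pointwise cost over the sampled locations. A small sketch with the absolute error (MAE) as the cost C; `g_theta_a` and `u` are hypothetical stand-ins for the operator output and the true solution evaluated at the sample points:

```python
import numpy as np

# Draw n_s i.i.d. sample locations x_i ~ mu (here mu = Uniform[0, 1]).
rng = np.random.default_rng(0)
n_s = 1000
x = rng.uniform(0.0, 1.0, size=n_s)

u = np.sin(2.0 * np.pi * x)                       # ground-truth u(x_i)
g_theta_a = u + 0.01 * rng.standard_normal(n_s)   # imperfect operator output

# E_{x~mu}[C(G_theta(a), u)(x)]  ~  (1/n_s) * sum_i |G_theta(a)(x_i) - u(x_i)|
empirical_cost = np.mean(np.abs(g_theta_a - u))
```

Note that nothing in this estimate requires the x_i to be equispaced, which is what lets the same objective cover both regular and irregular discretizations.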

