IMPLICIT NEURAL SPATIAL REPRESENTATIONS FOR TIME-DEPENDENT PDES

Abstract

Numerically solving partial differential equations (PDEs) often entails spatial and temporal discretizations. Traditional methods (e.g., finite difference, finite element, smoothed-particle hydrodynamics) frequently adopt explicit spatial discretizations, such as grids, meshes, and point clouds, where each degree of freedom corresponds to a location in space. While these explicit spatial correspondences are intuitive to model and understand, such representations are not necessarily optimal for accuracy, memory usage, or adaptivity. In this work, we explore implicit neural representation as an alternative spatial discretization, where spatial information is implicitly stored in the neural network weights. With an implicit neural spatial representation, PDE-constrained time-stepping translates into updating the network weights, which integrates naturally with commonly adopted optimization time integrators. Our approach requires neither training data nor a training/testing separation; like a classical PDE solver, our method is itself the solver. We validate our approach on a variety of classic PDEs, with examples involving large elastic deformations, turbulent fluids, and multi-scale phenomena. While slower to compute than traditional representations, our approach exhibits higher accuracy, lower memory consumption, and dynamically adaptive allocation of degrees of freedom without complex remeshing.

1. INTRODUCTION

Many science and engineering problems can be formulated as spatiotemporal partial differential equations (PDEs),

F(f, ∇f, ∇²f, …, ḟ, f̈, …) = 0,  f(x, t) : Ω × T → R^d,  (1)

where Ω ⊂ R^m and T ⊂ R are the spatial and temporal domains, respectively. Examples include the inviscid Navier-Stokes equations for fluid dynamics and the elastodynamics equation for solid mechanics. To numerically solve these PDEs, we oftentimes introduce temporal discretizations {t^n}_{n=0}^T, where T is the number of temporal discretization samples and Δt = t^{n+1} - t^n is the time step size. The solution to Equation (1) then becomes a list of spatially dependent vector fields {f^n(x)}_{n=0}^T. Traditional approaches represent these spatially dependent vector fields using grids, meshes, or point clouds. For example, the grid-based linear finite element method (Hughes, 2012) defines a shape function N_i on each grid node and represents the spatially dependent vector field as f^n(x) = Σ_{i=1}^P f_i^n N_i(x), where P is the number of spatial samples. While widely adopted in scientific computing applications, these traditional spatial representations are not without drawbacks:

1. Spatial discretization errors abound in fluid simulations as artificial numerical diffusion (Lantz, 1971), dissipation (Fedkiw et al., 2001), and viscosity (Roache, 1998). These errors also appear in solid simulations as inaccurate collision resolution (Müller et al., 2015) and numerical fractures (Sadeghirad et al., 2011).
2. Memory usage spikes with the number of spatial samples P (Museth, 2013).

We alleviate these limitations by exploring implicit neural representation (Park et al., 2019; Chen & Zhang, 2019; Mescheder et al., 2019) as an alternative spatial representation for PDE solvers. Unlike traditional representations that explicitly discretize the spatial field via spatial primitives (e.g., points), neural spatial representations implicitly encode the field through neural network weights.
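The grid-based representation described above can be sketched in a few lines. The following is a hypothetical minimal illustration (not code from this work) of the linear FEM field f^n(x) = Σ_{i=1}^P f_i^n N_i(x) on a uniform 1D grid, with hat-shaped shape functions N_i:

```python
import numpy as np

def hat_basis(x, nodes, i):
    """Piecewise-linear (hat) shape function N_i on a uniform 1D grid."""
    h = nodes[1] - nodes[0]                       # uniform spacing assumed
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[i]) / h)

def fem_field(x, nodes, coeffs):
    """Evaluate f(x) = sum_i coeffs[i] * N_i(x)."""
    return sum(c * hat_basis(x, nodes, i) for i, c in enumerate(coeffs))

nodes = np.linspace(-2.0, 2.0, 9)                 # P = 9 explicit spatial samples
coeffs = np.exp(-nodes**2)                        # nodal values of a Gaussian
print(fem_field(np.array([0.0]), nodes, coeffs))  # → [1.], the nodal value at x = 0
```

Note how each degree of freedom coeffs[i] is tied to a fixed location nodes[i]; memory grows linearly with P, which is the cost the implicit neural alternative avoids.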
In other words, the field is parameterized by a neural network (typically a multilayer perceptron), i.e., f^n(x) = f_{θ^n}(x), with θ^n being the network weights. As such, the memory usage for storing the spatial field is independent of the number of spatial samples; it is instead determined by the number of network weights. We show that under the same memory constraint, implicit neural representations indeed achieve higher accuracy than traditional discrete representations. Furthermore, implicit neural representations are adaptive by construction (Xie et al., 2021), allocating the network weights to resolve field details at any spatial location without changing the network architecture. Viewed through the lens of optimization-based time integrators, our PDE solver seeks neural network weights that optimize an incremental potential over time (Kane et al., 2000b). Our solver does not employ the training/testing split common in many neural-network-based PDE approaches (Sanchez-Gonzalez et al., 2020; Li et al., 2020b). Our approach is the solver itself and does not require training in the machine learning sense; as such, we avoid the word "training" in the exposition and instead use "optimizing". We employ exactly the same optimization integrator formulation as classical solvers (e.g., the finite element method (Bouaziz et al., 2014)). We compare the proposed solver to grid, mesh, and point cloud representations on time-dependent PDEs from various disciplines, and find that our approach trades wall-clock runtime for three benefits: lower discretization error, lower memory usage, and built-in adaptivity.
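The core idea, time-stepping as re-optimizing network weights, can be sketched on 1D advection f_t + c f_x = 0. The following is a hypothetical simplification, not the paper's implementation: the hidden layer (W1, b1) of a one-hidden-layer network is frozen, so each time step's weight optimization reduces to a deterministic linear least-squares solve for the output weights theta; the actual method optimizes all weights with gradient-based integrators. The per-step objective enforces the semi-Lagrangian update f^{n+1}(x) = f^n(x - cΔt) at collocation points:

```python
import numpy as np

rng = np.random.default_rng(0)

P = 64                                       # hidden width (illustrative)
W1 = rng.normal(size=P)                      # frozen input weights
b1 = rng.normal(size=P)                      # frozen biases

def features(x):
    """Hidden-layer activations tanh(W1 * x + b1) for a batch of 1D inputs."""
    return np.tanh(np.outer(x, W1) + b1)

def fit_output_weights(x, target):
    """theta = argmin_theta || f_theta(x) - target ||^2 (output layer only)."""
    theta, *_ = np.linalg.lstsq(features(x), target, rcond=None)
    return theta

x = np.linspace(-2.0, 2.0, 200)              # collocation points
c, dt = 0.25, 0.1                            # advection speed, time step size

# Encode the initial condition f(x, 0) = exp(-x^2) into the weights.
theta = fit_output_weights(x, np.exp(-x**2))

# Each time step re-optimizes the weights against the backtraced field f^n(x - c*dt).
for _ in range(8):
    target = features(x - c * dt) @ theta    # evaluate f^n at backtraced points
    theta = fit_output_weights(x, target)

err = np.abs(features(x) @ theta - np.exp(-(x - 8 * c * dt)**2)).mean()
print(f"mean abs error after 8 steps: {err:.4f}")
```

No training data or train/test split appears anywhere: the "data" at each step is generated by the solver's own previous state, exactly as in a classical time integrator.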



Figure 1: 1D advection example: A Gaussian-shaped wave initially centered at x = -1.5 moves rightward with a constant velocity of 0.25. From left to right, we show the mean absolute error over time and the solutions at t = 0s, t = 3s, and t = 12s, respectively. The solution from the grid-based finite difference method (green) tends to diffuse over time. PINN (Raissi et al., 2019) (yellow), trained within the temporal range 0 to 3s, fails to generalize to t = 12s. Our solution (blue) approximates the ground truth (grey) best over time. All three representations have the same memory footprint: our approach and PINN both use α = 2 hidden layers of width β = 20, and the finite difference grid resolution is 901. Adaptive meshing (Narain et al., 2012) and data structures (Setaluri et al., 2014) can reduce memory footprints but are often computationally expensive and challenging to implement.

2. RELATED WORK

Many prior works have explored representing continuous vector fields with neural networks. Here we highlight two lines of work: implicit neural representations and physics-informed neural networks. Implicit Neural Representation uses neural networks to parameterize spatially dependent functions. It has successfully captured radiance fields (Mildenhall et al., 2020) and signed distance fields (Park et al., 2019) in computer vision and graphics settings. It has also captured solutions of strictly spatially dependent PDEs in elastostatics (Zehnder et al., 2021), elliptic PDEs (Chiaramonte et al., 2013), and geometry processing (Yang et al., 2021). Chen et al. (2021), Pan et al. (2022), and Chen et al. (2022) also explore neural networks as spatial representations for dimension reduction. Dupont et al. (2022) develop a machine learning technique operating on data presented as implicit neural representations. Memory consumption of traditional representations,

