CROM: CONTINUOUS REDUCED-ORDER MODELING OF PDES USING IMPLICIT NEURAL REPRESENTATIONS

Abstract

The long runtime of high-fidelity partial differential equation (PDE) solvers makes them unsuitable for time-critical applications. We propose to accelerate PDE solvers using reduced-order modeling (ROM). Whereas prior ROM approaches reduce the dimensionality of discretized vector fields, our continuous reduced-order modeling (CROM) approach builds a low-dimensional embedding of the continuous vector fields themselves, not their discretization. We represent this reduced manifold using continuously differentiable neural fields, which may train on any and all available numerical solutions of the continuous system, even when they are obtained using diverse methods or discretizations. We validate our approach on an extensive range of PDEs with training data from voxel grids, meshes, and point clouds. Compared to prior discretization-dependent ROM methods, such as linear subspace proper orthogonal decomposition (POD) and nonlinear manifold neural-network-based autoencoders, CROM features higher accuracy, lower memory consumption, dynamically adaptive resolutions, and applicability to any discretization. For equal latent space dimension, CROM exhibits 79× and 49× better accuracy, and 39× and 132× smaller memory footprint, than POD and autoencoder methods, respectively. Experiments demonstrate 109× and 89× wall-clock speedups over unreduced models on CPUs and GPUs, respectively. Videos and code are available on the project page: https://crom-pde.github.io.

1. INTRODUCTION

Many scientific and engineering models are posed as partial differential equations (PDEs) of the form F(f, ∇f, ∇²f, ..., ḟ, f̈, ...) = 0, f(x, t) : Ω × T → R^d, subject to initial and boundary conditions. Here f is a spatiotemporally dependent, multidimensional continuous vector field, such as temperature, velocity, or displacement; ∇ and (˙) denote spatial and temporal gradients; Ω ⊂ R^m and T ⊂ R are the spatial and temporal domains, respectively. We may solve for f by discretizing in space, f(x, t) ≈ f_P(x, t) = Σ_{i=1}^P a_i(t) N_i(x), transforming the continuous spatial representation into a (P·d)-dimensional vector whose coefficients a_i(t) : T → R^d and corresponding basis functions N_i(x) : Ω → R (e.g., polynomial basis, Fourier basis) approximate the continuous solution. For instance, if N_i is the linear finite element basis, the coefficients a_i(t) = f(x_i, t) are field values at spatial samples x_i (Hughes, 2012). After introducing temporal samples {t_n}_{n=0}^T, we temporally evolve the solution by solving for the P unknowns {a_i(t_{n+1})} given the previous state {a_i(t_n)}. Unfortunately, when P is large, the processing and memory costs of these full-order solves become intractable.

To alleviate this computational burden, prior model reduction techniques (Berkooz et al., 1993; Willcox & Peraire, 2002; Benner et al., 2015) construct a manifold-parameterization function g_P : R^r → R^{Pd}, with r ≪ Pd, such that every low-dimensional latent space vector q(t) ∈ R^r maps to a discrete field g_P(q) → (a_1, ..., a_P)^T. For instance, for linear finite elements (Barbič & James, 2005), g_P(q) → (f(x_1, t), ..., f(x_P, t))^T, as depicted in Figure 1a. ROM saves computation because it requires evolving only r ≪ Pd latent space variables.¹ Since existing ROM approaches apply to already-discretized fields, model training and PDE solving are tied to the dimension and discretization type of the training data, causing key limitations:

Discretization dependence. If we alter the training simulation resolution (P) or the discretization type (e.g., from meshes to point clouds), we must also alter the architecture and number of parameters.

Memory scaling. The memory footprint grows with the discretization resolution P.

Fixed discretization. We cannot dynamically adapt the spatial resolution P, the discretization type, or the basis functions N_i during latent-space-PDE solves, e.g., via dynamic remeshing (Peraire et al., 1987).

Altogether, these problems arise because the architecture of g_P(q) is tied to the discretization (a_1, ..., a_P)^T.

Introducing a discretization-independent architecture. In an alternative point of departure, we train a manifold-parameterization function g(x, q) ≈ f(x, t) to approximate the continuous field itself, not its discretization (see Figure 1b). Note that the domain and co-domain of g are continuous: they do not depend on the choice of discretization(s) used at any stage of the process, i.e., neither during preparation of the training data nor during latent-space-PDE solving. In this sense, the manifold-parameterization architecture is discretization independent. In our implementation, g is embodied as an implicit neural representation (Park et al., 2019; Chen & Zhang, 2019; Mescheder et al., 2019), also known as a neural field, yielding a smooth and analytically differentiable manifold parameterization. This representation's memory footprint depends on the complexity of the fields produced by the PDE, not on the discretization resolution. After training, we evolve the latent variables, as governed by the PDE, for previously unexplored parameters. Unlike approaches that discard the PDE after training, we evaluate the original PDE at a small number of domain points at every time-integration step. We validate our approach on classic PDEs with discretized data from voxel grids, meshes, and point clouds. In comparison to the full-order model, our approach reduces the number of spatial degrees of freedom, the memory footprint, and the computational cost. In comparison to prior linear and nonlinear discretization-dependent model reduction methods, our method exhibits higher accuracy and consumes less memory. To highlight another benefit of being discretization-agnostic, we demonstrate an elasticity simulation that readily adapts its mesh resolution.

2. RELATED WORK

Reduced-Order Modeling for PDEs. Early works on identifying a low-dimensional latent space focused on linear methods (Berkooz et al., 1993; Holmes et al., 2012), e.g., proper orthogonal decomposition (POD) or principal component analysis (PCA). Recent nonlinear manifolds (Fulton et al., 2019; Lee & Carlberg, 2020), often constructed via autoencoder neural networks, have been

¹ The latent space vector is also known as the feature, subspace, or state vector; or the generalized coordinates.

Figure 1: Model reduction solves PDEs via temporal evolution of the low-dimensional latent space vector q(t). (a) Prior work assumes that the low-dimensional representation g_P is built for the already-discretized vector field; (b) our approach constructs the manifold-parameterization function g directly for the continuous vector field itself. In this case, the vector field f represents a twisting material governed by the elastodynamics equation.
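To make the spatial discretization f_P(x, t) = Σ_i a_i(t) N_i(x) from the introduction concrete, here is a minimal 1D NumPy sketch of linear finite-element interpolation. The field, node count, and helper names (`hat`, `f_P`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 1D setup: spatial samples x_i and nodal coefficients a_i(t) = f(x_i, t)
# at one time instant (any sampled solution would play the same role).
nodes = np.linspace(0.0, 1.0, 9)        # x_1, ..., x_P with P = 9
a = np.sin(np.pi * nodes)               # nodal values a_i

def hat(x, i):
    """Linear FEM basis N_i: piecewise linear, 1 at nodes[i], 0 at all others."""
    indicator = np.zeros_like(nodes)
    indicator[i] = 1.0
    return np.interp(x, nodes, indicator)

def f_P(x):
    """Discretized field f_P(x) = sum_i a_i N_i(x)."""
    return sum(a[i] * hat(x, i) for i in range(len(nodes)))

# With linear elements, the expansion is exactly piecewise-linear interpolation
# of the nodal values, and a_i = f_P(x_i) at every node.
value = f_P(0.5)
```

Evaluating `f_P` at any x reproduces `np.interp(x, nodes, a)`, which illustrates the statement that for linear finite elements the coefficients are simply field values at the spatial samples.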
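The discretization-dependent baseline g_P can likewise be sketched. Below is a minimal POD construction on assumed toy data; the snapshot family, mode count r, and function names (`encode`, `g_P`) are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is a discretized field (a_1, ..., a_P)^T
# recorded at some time / parameter value. Toy data: P = 200 DOFs, 50 snapshots.
P, n_snap, r = 200, 50, 5
x = np.linspace(0.0, 1.0, P)
snapshots = np.stack([np.sin(np.pi * k * x) for k in rng.uniform(1.0, 3.0, n_snap)],
                     axis=1)

# POD basis: the leading r left singular vectors of the snapshot matrix.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
U_r = U[:, :r]

def encode(a):
    """Project a discrete field a in R^P to the latent vector q in R^r."""
    return U_r.T @ a

def g_P(q):
    """Linear manifold parameterization: q in R^r -> discrete field in R^P."""
    return U_r @ q

a = snapshots[:, 0]
a_rec = g_P(encode(a))
rel_err = np.linalg.norm(a - a_rec) / np.linalg.norm(a)
```

Note that U_r has exactly one row per degree of freedom, so changing the resolution P or the discretization type invalidates the basis and forces retraining — the "discretization dependence" limitation discussed above.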
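The continuous alternative can be sketched in the same spirit. CROM's g is a trained neural field; as a hand-constructed stand-in, the sketch below uses a decoder that is linear in q with sinusoidal spatial features, so the latent time-stepping idea — evaluate the PDE at a small number of domain points, then solve a small least-squares system for the latent velocity — is easy to verify on a 1D heat equation. The toy PDE, sample points, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hand-built stand-in for a learned manifold parameterization g(x, q):
# linear in q with sinusoidal spatial features. (CROM's actual g is a
# trained neural field; this toy keeps the latent stepping verifiable.)
modes = np.array([1.0, 2.0, 3.0])

def features(x):
    """Spatial features evaluated at arbitrary points x in [0, 1]."""
    return np.sin(np.outer(x, modes * np.pi))     # shape (len(x), r)

def g(x, q):
    """Continuous field g(x, q) ~ f(x, t); works at ANY x, no mesh needed."""
    return features(x) @ q

def g_xx(x, q):
    """Spatial Laplacian of g, known analytically for sinusoidal features."""
    return (-(modes * np.pi) ** 2 * features(x)) @ q

def step(q, x_s, nu, dt):
    """One explicit-Euler latent step for the heat equation f_t = nu * f_xx:
    evaluate the PDE at sample points x_s and solve the small least-squares
    system (dg/dq) qdot = nu * g_xx for the latent velocity qdot."""
    J = features(x_s)                             # dg/dq (decoder is linear in q)
    qdot, *_ = np.linalg.lstsq(J, nu * g_xx(x_s, q), rcond=None)
    return q + dt * qdot

q = np.array([1.0, 0.5, 0.25])                    # initial latent state
x_s = np.linspace(0.05, 0.95, 7)                  # only 7 sample points, no mesh
for _ in range(100):
    q = step(q, x_s, nu=0.1, dt=1e-3)
# Each latent mode decays approximately like exp(-nu * (k*pi)^2 * t).
```

Because the sinusoids are heat-equation eigenfunctions, each latent coordinate should decay at its analytic rate, which makes the least-squares stepping checkable; a learned neural-field decoder would replace `features` with network evaluations and `dg/dq` with automatic differentiation.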

