CONTINUOUS PDE DYNAMICS FORECASTING WITH IMPLICIT NEURAL REPRESENTATIONS

Abstract

Effective data-driven PDE forecasting methods often rely on fixed spatial and/or temporal discretizations. This raises limitations in real-world applications like weather prediction, where flexible extrapolation at arbitrary spatiotemporal locations is required. We address this problem by introducing a new data-driven approach, DINO, that models a PDE's flow with continuous-time dynamics of spatially continuous functions. This is achieved by embedding spatial observations, independently of their discretization, via Implicit Neural Representations in a small latent space temporally driven by a learned ODE. This separate and flexible treatment of time and space makes DINO the first data-driven model to combine the following advantages: it extrapolates at arbitrary spatial and temporal locations; it can learn from sparse irregular grids or manifolds; and, at test time, it generalizes to new grids or resolutions. DINO outperforms alternative neural PDE forecasters in a variety of challenging generalization scenarios on representative PDE systems.

1. INTRODUCTION

Modeling the dynamics and predicting the temporal evolution of physical phenomena is paramount in many fields, e.g. climate modeling, biology, fluid mechanics and energy (Willard et al., 2022). Classical solutions rely on a well-established physical paradigm: the evolution is described by differential equations derived from physical first principles, and then solved using numerical analysis tools, e.g. finite elements, finite volumes or spectral methods (Olver, 2014). The availability of large amounts of data from observations or simulations has motivated data-driven approaches to this problem (Brunton & Kutz, 2022), leading to a rapid development of the field with deep learning methods. The main motivations for this research track include developing surrogate or reduced order models that can approximate high-fidelity full order models at reduced computational costs (Kochkov et al., 2021), complementing classical solvers, e.g. to account for additional components of the dynamics (Yin et al., 2021), or improving low-fidelity models (De Avila Belbute-Peres et al., 2020). Most of these attempts rely on workhorses of deep learning like CNNs (Ayed et al., 2020) or GNNs (Li et al., 2020; Pfaff et al., 2021; Brandstetter et al., 2022). They all require prior space discretization, either on regular or irregular grids, such that they only capture the dynamics on the train grid and cannot generalize outside it. Neural operators, a recent trend, learn mappings between function spaces (Li et al., 2021b; Lu et al., 2021) and thus alleviate some limitations of prior discretization approaches. Yet, they still rely on fixed grid discretization for training and inference: e.g., regular grids for Li et al. (2021b) or a free-form but predetermined grid for Lu et al. (2021). Hence, the number and/or location of the sensors has to be fixed across train and test, which is restrictive in many situations (Prasthofer et al., 2022).
Mesh-agnostic approaches for solving canonical Partial Differential Equations (PDEs) are another trend (Raissi et al., 2019; Sirignano & Spiliopoulos, 2018). In contrast to physics-agnostic grid-based approaches, they aim at solving a known PDE as usual solvers do, and cannot cope with unknown dynamics. This idea was concurrently developed in computer graphics, e.g. for learning 3D shapes (Sitzmann et al., 2020; Mildenhall et al., 2020; Tancik et al., 2020), and coined Implicit Neural Representations (INRs). When used as solvers, these methods can only tackle a single initial value problem and are not designed for long-term forecasting outside the training horizon.

These considerations motivate the development of new machine learning models that improve existing approaches on several of these aspects. In our work, we aim at forecasting PDE-based spatiotemporal physical processes with a versatile model tackling the aforementioned limitations. We adopt an agnostic approach, i.e. one not assuming any prior knowledge of the physics. We introduce DINO (Dynamics-aware Implicit Neural representations), a model operating continuously in space and time, with the following contributions.

Continuous flow learning. DINO learns the PDE's flow to forecast its solutions in a continuous manner, so that it can be trained on any spatial and temporal discretization and applied to another. To this end, DINO embeds spatial observations into a small latent space via INRs; it then models continuous-time evolution with a learned latent Ordinary Differential Equation (ODE).

Space-time separation. To efficiently encode different sequences, we propose a novel INR parameterization, amplitude modulation, implementing a space-time separation of variables. This simplifies the learned dynamics, reduces the number of parameters and greatly improves performance.

Spatiotemporal versatility. DINO combines the benefits of prior models.
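The contributions above can be illustrated with a minimal numerical sketch. The names, dimensions and weight initializations below are ours and purely illustrative (the weights are untrained); the sketch only shows the structure: a shared spatial INR backbone, a per-time latent code that rescales hidden units (amplitude modulation), and a latent vector field integrated in time, so any space-time location (x, t) can be queried.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's).
D_LATENT, D_HIDDEN, N_FEAT = 8, 16, 32

# Shared spatial backbone: random Fourier features of a 2D coordinate,
# common to all sequences -- the "space" part of the separation.
W_coord = rng.normal(0.0, 1.0, (N_FEAT, 2))
W_hidden = rng.normal(0.0, 0.1, (D_HIDDEN, 2 * N_FEAT))
w_out = rng.normal(0.0, 0.1, (D_HIDDEN,))

# Amplitude modulation: the latent code alpha_t only rescales hidden
# units -- the "time" part of the separation.
W_mod = rng.normal(0.0, 0.1, (D_HIDDEN, D_LATENT))

def decode(x, alpha_t):
    """Value of the field v_t at coordinate x, given latent state alpha_t."""
    proj = W_coord @ x
    feats = np.concatenate([np.sin(proj), np.cos(proj)])
    hidden = np.tanh(W_hidden @ feats)
    return float(w_out @ (hidden * (1.0 + W_mod @ alpha_t)))

# Latent dynamics: a toy stand-in for the learned vector field, here
# integrated with explicit Euler (a real model would train this and
# could use any ODE solver).
A = rng.normal(0.0, 0.3, (D_LATENT, D_LATENT))

def latent_at(alpha_0, t, dt=1e-2):
    alpha = alpha_0.copy()
    for _ in range(int(round(t / dt))):
        alpha = alpha + dt * np.tanh(A @ alpha)
    return alpha

# Forecast at an arbitrary space-time location (x, t):
alpha_0 = rng.normal(0.0, 1.0, (D_LATENT,))
value = decode(np.array([0.3, 0.7]), latent_at(alpha_0, t=1.5))
```

Note how space and time never interact except through alpha_t: the backbone is evaluated once per coordinate, while the dynamics live entirely in the small latent space.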

2. PROBLEM DESCRIPTION

Problem setting. We aim at modeling, via a data-driven approach, the temporal evolution of a continuous, fully-observed, deterministic spatiotemporal phenomenon. It is described by trajectories v : R → V in a set Γ; we write v_t ≜ v(t) ∈ V. We focus on Initial Value Problems, where only v_t at any time t is required to infer v_{t′} for t′ > t. Hence, trajectories share the same dynamics but differ by their initial condition v_0 ∈ V. R is the temporal domain and V is a functional space of the form Ω → R^n, where Ω ⊂ R^p is a compact spatial domain and n the number of observed values. In other words, v_t is a spatial function of x ∈ Ω with vectorial output v_t(x) ∈ R^n; cf. the examples of Section 5.1.

To this end, we consider the setting illustrated in Figure 1. We observe a finite training set of trajectories D on a free-form spatial observation grid X_tr ⊂ Ω and at discrete times t ∈ 𝒯 ⊂ [0, T]. At test time, we are only given a new initial condition v_0, with values v_0|X_ts observed on a new grid X_ts, potentially different from X_tr. Inference is performed on both train and test trajectories given only the initial condition, on a new free-form grid X′ ⊂ Ω and at times t ∈ 𝒯′ ⊂ [0, T′]. The inference grid X′ comprises observed positions (respectively X_tr and X_ts for train and test trajectories) and unobserved positions corresponding to spatial interpolation. Note that the inference temporal horizon is larger than the train one: T < T′. For simplicity, In-s refers to data in X′ on the observation grid (X_tr for train / X_ts for test), Out-s to data in X′ outside the observation grid; In-t refers to times within the train horizon 𝒯 ⊂ [0, T], and Out-t to times in 𝒯′ \ 𝒯 ⊂ (T, T′], beyond T and up to the inference horizon T′.
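The In-s/Out-s and In-t/Out-t splits can be made concrete with a small sketch. The grid sizes, horizons and variable names below are illustrative assumptions, not values from the paper; the point is only how the two boolean masks cross into four evaluation regimes.

```python
import numpy as np

rng = np.random.default_rng(1)

T_TRAIN, T_INFER = 1.0, 2.0                 # horizons: T < T'

# Free-form grids: the training observation grid X_tr plus extra
# unobserved positions together form the inference grid X'.
X_tr = rng.uniform(0.0, 1.0, (64, 2))       # observed positions
X_extra = rng.uniform(0.0, 1.0, (32, 2))    # spatial-interpolation targets
X_infer = np.concatenate([X_tr, X_extra])   # X'

in_s = np.zeros(len(X_infer), dtype=bool)
in_s[: len(X_tr)] = True                    # In-s mask; Out-s is ~in_s

# Discrete inference times up to T'; train times lie within [0, T].
t_infer = np.linspace(0.0, T_INFER, 21)
in_t = t_infer <= T_TRAIN + 1e-9            # In-t mask; Out-t is ~in_t

# Crossing the two masks gives the four evaluation regimes:
n_cells = {
    "In-s/In-t":   int(in_s.sum()) * int(in_t.sum()),
    "In-s/Out-t":  int(in_s.sum()) * int((~in_t).sum()),
    "Out-s/In-t":  int((~in_s).sum()) * int(in_t.sum()),
    "Out-s/Out-t": int((~in_s).sum()) * int((~in_t).sum()),
}
```

Only the In-s/In-t cell is seen during training; the other three cells probe spatial interpolation, temporal extrapolation, or both.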



Comparison of data-driven approaches to spatiotemporal PDE forecasting.

Because of these limitations, none of the above approaches can handle situations encountered in many practical applications, such as: different geometries, e.g. phenomena lying on a Euclidean plane or an Earth-like sphere; variable sampling, e.g. irregular observation grids that may evolve at train and test time as in adaptive meshing (Berger & Oliger, 1984); scarce training data, e.g. when observations are only available at a few spatiotemporal locations; and multi-scale phenomena, e.g. large-scale dynamical systems such as climate modeling, where integrating intertwined subgrid scales, a.k.a. the closure problem, is ubiquitous (Zanna & Bolton, 2021).

Via its amplitude modulation, DINO tackles new sequences; sequential modeling with an ODE lets it extrapolate to unseen times, within or beyond the train horizon; and INRs' spatial flexibility lets it generalize to new grids or resolutions, predict at arbitrary positions, and handle sparse irregular grids or manifolds.

Empirical validation. We demonstrate DINO's versatility and state-of-the-art performance versus prior neural PDE forecasters, representative of grid-, operator- and INR-based methods, via thorough experiments on challenging multi-dimensional PDEs in various spatiotemporal generalization settings.
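The grid- and resolution-independence claim can be sketched with a toy stand-in. Below, a closed-form field plays the role of a decoded INR state: like an INR, it can be queried at any coordinate, so the rendering resolution is a free choice at inference time (the field and grid sizes are our illustrative assumptions, not the paper's).

```python
import numpy as np

# Stand-in for a decoded DINO state: a continuous field on the unit
# square that, like an INR, accepts arbitrary query coordinates.
def v_t(x):
    return np.sin(2 * np.pi * x[..., 0]) * np.cos(2 * np.pi * x[..., 1])

def render(n):
    """Evaluate the same continuous state on an n x n regular grid."""
    s = np.linspace(0.0, 1.0, n)
    grid = np.stack(np.meshgrid(s, s, indexing="ij"), axis=-1)
    return v_t(grid)

coarse = render(16)   # low-resolution view of the state
fine = render(61)     # 4x finer spacing -- no retraining or resampling

# Every 4th node of the fine grid coincides with a coarse node, so the
# two renderings agree there: the representation is discretization-free.
```

The same mechanism covers irregular grids: v_t can just as well be evaluated at scattered points or on a manifold parameterization, since nothing ties it to a mesh.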

