CONTINUOUS PDE DYNAMICS FORECASTING WITH IMPLICIT NEURAL REPRESENTATIONS

Abstract

Effective data-driven PDE forecasting methods often rely on fixed spatial and/or temporal discretizations. This poses limitations in real-world applications like weather prediction, where flexible extrapolation at arbitrary spatiotemporal locations is required. We address this problem by introducing a new data-driven approach, DINO, that models a PDE's flow with continuous-time dynamics of spatially continuous functions. This is achieved by embedding spatial observations, independently of their discretization, via Implicit Neural Representations (INRs) in a small latent space driven temporally by a learned ODE. This separate and flexible treatment of time and space makes DINO the first data-driven model to combine the following advantages: it extrapolates at arbitrary spatial and temporal locations; it can learn from sparse, irregular grids or manifolds; and, at test time, it generalizes to new grids or resolutions. DINO outperforms alternative neural PDE forecasters in a variety of challenging generalization scenarios on representative PDE systems.
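The two-part mechanism described above (a latent code evolved continuously in time, decoded spatially by an INR) can be sketched as follows. This is a minimal illustrative toy, not the paper's actual architecture: the dimensions, random weights, tanh networks, and Euler integrator are all assumptions for exposition; in DINO these components are learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (illustrative, not from the paper).
LATENT_DIM, HIDDEN = 8, 16

# Stand-in "learned" weights; DINO would train these from data.
W_dyn = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))
W_dec = rng.normal(scale=0.5, size=(1 + LATENT_DIM, HIDDEN))
w_out = rng.normal(scale=0.5, size=HIDDEN)

def dynamics(z):
    """Latent ODE dz/dt = f(z): time is treated continuously."""
    return np.tanh(W_dyn @ z)

def decode(x, z):
    """INR decoder u(x) = g(x, z): queryable at any spatial location x."""
    h = np.tanh(W_dec.T @ np.concatenate(([x], z)))
    return float(w_out @ h)

def forecast(z0, x_queries, t, n_steps=100):
    """Integrate the latent code to time t (Euler), then decode anywhere."""
    z, dt = z0.copy(), t / n_steps
    for _ in range(n_steps):
        z = z + dt * dynamics(z)
    return [decode(x, z) for x in x_queries]

z0 = rng.normal(size=LATENT_DIM)  # latent embedding of the initial condition
# Query arbitrary off-grid spatial points at an arbitrary forecast time:
u = forecast(z0, x_queries=[0.0, 0.37, 1.0], t=0.5)
```

Because time enters only through the ODE integration and space only through the decoder's query coordinate, the forecast grid and horizon are decoupled from those seen at training time, which is the source of the flexibility claimed above.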

1. INTRODUCTION

Modeling the dynamics and predicting the temporal evolution of physical phenomena is paramount in many fields, e.g. climate modeling, biology, fluid mechanics and energy (Willard et al., 2022). Classical solutions rely on a well-established physical paradigm: the evolution is described by differential equations derived from physical first principles, which are then solved using numerical analysis tools, e.g. finite elements, finite volumes or spectral methods (Olver, 2014). The availability of large amounts of data from observations or simulations has motivated data-driven approaches to this problem (Brunton & Kutz, 2022), leading to a rapid development of the field with deep learning methods. The main motivations for this research track include developing surrogate or reduced order models that approximate high-fidelity full order models at reduced computational cost (Kochkov et al., 2021), complementing classical solvers, e.g. to account for additional components of the dynamics (Yin et al., 2021), or improving low-fidelity models (De Avila Belbute-Peres et al., 2020). Most of these attempts rely on workhorses of deep learning like CNNs (Ayed et al., 2020) or GNNs (Li et al., 2020; Pfaff et al., 2021; Brandstetter et al., 2022). They all require prior space discretization on regular or irregular grids, such that they only capture the dynamics on the training grid and cannot generalize outside it.

Neural operators, a recent trend, learn mappings between function spaces (Li et al., 2021b; Lu et al., 2021) and thus alleviate some limitations of prior discretization-based approaches. Yet, they still rely on a fixed grid discretization for training and inference: e.g., regular grids for Li et al. (2021b) or a free-form but predetermined grid for Lu et al. (2021). Hence, the number and/or location of the sensors has to be fixed across training and testing, which is restrictive in many situations (Prasthofer et al., 2022).
Mesh-agnostic approaches for solving canonical PDEs are another trend (Raissi et al., 2019; Sirignano & Spiliopoulos, 2018). In contrast to physics-agnostic grid-based approaches, they aim to solve a known PDE, as classical solvers do, and cannot cope with unknown dynamics. This idea was concurrently developed in computer graphics, e.g. for learning 3D shapes (Sitzmann et al., 2020; Mildenhall et al., 2020; Tancik et al., 2020), and coined Implicit Neural Representations (INRs). When used as solvers, these methods can only tackle a single initial value problem and are not designed for long-term forecasting beyond the training horizon.

