MULTISCALE NEURAL OPERATOR: LEARNING FAST AND GRID-INDEPENDENT PDE SOLVERS

Abstract

Numerical simulations in climate, chemistry, or astrophysics are computationally too expensive for uncertainty quantification or parameter exploration at high resolution. Reduced-order or surrogate models are multiple orders of magnitude faster, but traditional surrogates are inflexible or inaccurate, and pure machine-learning (ML) surrogates are too data-hungry. We propose a hybrid, flexible surrogate model that exploits known physics for simulating large-scale dynamics and limits learning to the hard-to-model term, called a parametrization or closure, which captures the effect of fine- onto large-scale dynamics. Leveraging neural operators, we are the first to learn grid-independent, non-local, and flexible parametrizations. Our multiscale neural operator is motivated by a rich literature in multiscale modeling, has quasilinear runtime complexity, is more accurate or more flexible than state-of-the-art parametrizations, and is demonstrated on the chaotic multiscale Lorenz96 equation.

1. INTRODUCTION

Climate change increases the likelihood of storms, floods, wildfires, heat waves, biodiversity loss, and air pollution (IPCC, 2018). Decision-makers rely on climate models to understand and plan for changes in climate, but current climate models are computationally too expensive: as a result, they are hard to access, cannot predict local changes (< 10km), fail to resolve local extremes (e.g., rainfall), and do not reliably quantify uncertainties (Palmer et al., 2019). For example, running a global climate model at 1km resolution can take ten days on a 4888-GPU supercomputer, consuming the same electricity as a coal power plant generates in one hour (Fuhrer et al., 2018). Similarly, in molecular dynamics (Batzner et al., 2022), chemistry (Behler, 2011), biology (Yazdani et al., 2020), energy (Zhang et al., 2019), astrophysics, or fluids (Duraisamy et al., 2019), scientific progress is hindered by the computational cost of solving partial differential equations (PDEs) at high resolution (Karniadakis et al., 2021). We propose the first PDE surrogate that quickly computes approximate solutions by correcting known large-scale simulations with learned, grid-independent, non-local parametrizations.
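To make the parametrization idea concrete, the following is a minimal sketch of a closed surrogate for the two-scale Lorenz96 system named in the abstract: the slow variables are integrated explicitly, while the expensive coupling of fast onto slow variables is replaced by a learned closure. The pointwise polynomial stand-in, its coefficients, and all function names here are hypothetical placeholders for illustration, not the paper's neural-operator architecture.

```python
import numpy as np

# Two-scale Lorenz96: slow variables X_k are driven by many fast
# variables Y_{j,k}. A surrogate integrates only the slow equation and
# replaces the expensive fine-scale coupling with a learned closure.
K, J = 8, 32               # number of slow variables / fast per slow
F, h, c, b = 10.0, 1.0, 10.0, 10.0  # forcing and coupling constants

def slow_tendency(X, coupling):
    # dX_k/dt = -X_{k-1} (X_{k-2} - X_{k+1}) - X_k + F + coupling_k
    return (-np.roll(X, 1) * (np.roll(X, 2) - np.roll(X, -1))
            - X + F + coupling)

def true_coupling(Y):
    # Exact fine-onto-coarse forcing, -(h c / b) * sum_j Y_{j,k}:
    # this is the term the closure is trained to approximate.
    return -(h * c / b) * Y.reshape(K, J).sum(axis=1)

def learned_closure(X, coeffs):
    # Hypothetical stand-in for the learned operator: a pointwise
    # polynomial in X_k (real coefficients would be fit to data).
    return np.polyval(coeffs, X)

def rk4_step(X, coeffs, dt=0.01):
    # One RK4 step of the closed (slow-only) surrogate model.
    f = lambda x: slow_tendency(x, learned_closure(x, coeffs))
    k1 = f(X)
    k2 = f(X + 0.5 * dt * k1)
    k3 = f(X + 0.5 * dt * k2)
    k4 = f(X + dt * k3)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
X = F + 0.01 * rng.standard_normal(K)   # perturbed rest state
coeffs = [-0.3, -0.6]                   # toy linear closure in X_k
for _ in range(1000):                   # integrate 10 model time units
    X = rk4_step(X, coeffs)
print(X.shape)  # (8,)
```

The key design point is that only `learned_closure` needs training data; the resolved large-scale dynamics in `slow_tendency` remain the known physics, which is what makes the hybrid surrogate less data-hungry than a purely learned solver.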


Figure 1: Multiscale neural operator (MNO). Explicitly modeling all scales of Earth's weather is too expensive for traditional and learning-based solvers (Palmer et al., 2019). MNO dramatically reduces the computational cost by modeling the large-scale dynamics explicitly and learning the effect of fine- onto large-scale dynamics, such as turbulence slowing down a river stream. We embed a grid-independent neural operator in the large-scale physical simulation as a "parametrization", conceptually similar to stacking dolls (Snagglebit, 2022).

