LEARNING CONTROLLABLE ADAPTIVE SIMULATION FOR MULTI-RESOLUTION PHYSICS

Abstract

Simulating the time evolution of physical systems is pivotal in many scientific and engineering problems. An open challenge in simulating such systems is their multi-resolution dynamics: a small fraction of the system is extremely dynamic and requires very fine-grained resolution, while the majority of the system changes slowly and can be modeled at coarser spatial scales. Typical learning-based surrogate models use a uniform spatial scale, which must resolve to the finest required scale and can waste a huge amount of compute to achieve the required accuracy. In this work, we introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully deep learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions that devote more compute to the highly dynamic regions. LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening. We introduce learning techniques that optimize LAMP with a weighted sum of error and computational cost as objective, allowing LAMP to adapt to the varying relative importance of error vs. computation at inference time. We evaluate our method on a 1D benchmark of nonlinear PDEs and a challenging 2D mesh-based simulation. We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error: it achieves an average of 33.7% error reduction for 1D nonlinear PDEs, and outperforms MeshGraphNets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations.

1. INTRODUCTION

Simulating the time evolution of a physical system is of vital importance in science and engineering (Lynch, 2008; Carpanese, 2021; Sircombe et al., 2006; Courant et al., 1967; Lelievre & Stoltz, 2016). Usually, the physical system has a multi-resolution nature: a small fraction of the system is highly dynamic and requires very fine-grained resolution to simulate accurately, while the majority of the system changes slowly. Examples include hazard prediction in weather forecasting (Majumdar et al., 2021), disruptive instabilities in the plasma fluid in nuclear fusion (Kates-Harbeck et al., 2019), air dynamics near the boundary in jet engine design (Athanasopoulos et al., 2009), and more familiar examples such as wrinkles in a cloth (Pfaff et al., 2021) and fluid near the boundary in flow through a cylinder (Vlachas et al., 2022). Due to the typically huge size of such systems, it is pivotal that they are simulated not only accurately, but also with as small a computational cost as possible. A uniform spatial resolution that pays similar attention to regions with vastly different dynamics will waste significant compute on slow-changing regions while remaining insufficient for highly dynamic regions. Recently, deep learning-based surrogate models have emerged as attractive alternatives to classical solvers, typically accelerating simulation by learning the evolution on a coarser spatial (Um et al., 2020; Kochkov et al., 2021) or temporal resolution (Li et al., 2021), or via latent representations (Sanchez-Gonzalez et al., 2020; Wu et al., 2022). However, current deep learning-based surrogate models typically assume a uniform or fixed spatial resolution, without learning how to best assign computation to the spatial regions that need it most. Thus, they may be insufficient to address the aforementioned multi-resolution challenge. Although adaptive methods such as Adaptive Mesh Refinement (AMR) (Soner et al., 2003; Cerveny et al., 2019) exist for classical solvers, they share similar drawbacks (e.g., slow speed) with classical solvers.
What is needed is a deep learning-based surrogate model that learns both the evolution and how to assign computation to the regions that need it. In this work, we introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first fully DL-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions that devote more compute to the highly dynamic regions. Our key insight is that by explicitly setting error and computation as the combined objective to optimize, the model can learn to adaptively decide the best local spatial resolution at which to evolve the system. To achieve this goal, LAMP consists of a Graph Neural Network (GNN)-based evolution model for learning the forward evolution, and a GNN-based actor-critic for learning the policy of discrete actions of local refinement and coarsening of the spatial mesh, conditioned on the local state and a coefficient β that weights the relative importance of error vs. computation. The policy (actor) outputs both the numbers of refinement and coarsening actions and which edges to refine or coarsen, while the critic evaluates the expected reward of the current policy. The full system is trained in an alternating fashion, iterating between training the evolution model with a supervised loss and training the actor-critic via reinforcement learning (RL). Taken together, a single instance of the evolution model and actor-critic jointly optimizes the reduction of error and computation for the physical simulation, and can operate across different relative importances of the two metrics at inference time. We evaluate our model on a 1D benchmark of nonlinear PDEs (which tests generalization across PDEs of the same family) and a challenging 2D mesh-based simulation of paper folding.
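To make the β-weighted objective and the alternating training schedule concrete, here is a minimal sketch. The reward form (a β-weighted combination of error reduction and computation reduction) and the function names `reward` and `alternate_training` are our illustrative assumptions, not the paper's exact implementation.

```python
def reward(error_reduction, compute_reduction, beta):
    # Weighted combination of the two improvements: beta close to 1 favors
    # cheaper (coarser) meshes, beta close to 0 favors accuracy.
    # The exact functional form here is an assumption for illustration.
    return (1.0 - beta) * error_reduction + beta * compute_reduction

def alternate_training(num_iters, train_evolution, train_policy):
    # Alternate between (1) supervised training of the evolution GNN and
    # (2) RL training of the actor-critic remeshing policy, as the paper
    # describes. The two callables stand in for the actual training steps.
    log = []
    for it in range(num_iters):
        if it % 2 == 0:
            log.append(("evolution", train_evolution()))
        else:
            log.append(("policy", train_policy()))
    return log
```

At inference time, the same trained policy can be queried with different β values to move along the error vs. computation trade-off curve.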
In 1D, we show that our model outperforms state-of-the-art deep learning-based surrogate models in terms of long-term evolution error by 33.7%, and can adaptively trade off computation to improve long-term prediction error. In the 2D mesh-based simulation, our model outperforms the state-of-the-art MeshGraphNets combined with classical Adaptive Mesh Refinement (AMR).

2. PROBLEM SETTING AND RELATED WORK

We consider the numerical simulation of a physical system, following the notation introduced in Pfaff et al. (2021). The system's state at time t is discretized into the mesh-based state M^t = (V^t, E^t), where V^t is the set of mesh nodes and E^t is the set of mesh edges.

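A minimal stand-in for such a mesh-based state, together with the local edge-refinement action that LAMP's policy can choose, might look as follows. The class and field names are ours, and midpoint interpolation is one simple choice for initializing the new node's features; this is a sketch, not the paper's data structure.

```python
from dataclasses import dataclass

@dataclass
class MeshState:
    # M^t = (V^t, E^t): node features (e.g., coordinates and physical
    # quantities) keyed by node id, plus an undirected edge set.
    nodes: dict   # node_id -> feature vector (list of floats)
    edges: set    # frozenset({i, j}) pairs

    def refine_edge(self, i, j, new_id):
        # Split edge (i, j) by inserting a midpoint node whose features are
        # linearly interpolated from the endpoints, then rewire the edges.
        self.nodes[new_id] = [(a + b) / 2 for a, b in zip(self.nodes[i], self.nodes[j])]
        self.edges.discard(frozenset((i, j)))
        self.edges |= {frozenset((i, new_id)), frozenset((new_id, j))}
```

Coarsening would be the inverse operation: merging two adjacent nodes and collapsing the edge between them.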
Figure 1: LAMP schematic. The forward iteration (upper box) first uses the policy f_φ^policy to decide the numbers K_re and K_co of edges, as well as which edges of the full mesh, to refine or coarsen, and then executes remeshing and interpolation. The evolution model f_θ^evo is applied to the updated mesh M'^t to predict the state M^{t+1} at the next time step. We use the reduction of both error and computation (mesh size), compared to a multi-step rollout without remeshing, as the reward to learn the policy. For more details, see Section 3.2.
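The forward iteration in Figure 1 can be sketched as a single composed step. The function names (`policy`, `remesh`, `evolve`) and their signatures are illustrative assumptions standing in for f_φ^policy, the remeshing-plus-interpolation step, and f_θ^evo.

```python
def lamp_step(mesh, policy, remesh, evolve, beta):
    # One LAMP forward iteration:
    # 1. the policy picks which edges to refine/coarsen, conditioned on beta,
    # 2. remeshing + interpolation produce the updated mesh M'^t,
    # 3. the evolution GNN predicts the next state M^{t+1} on the new mesh.
    refine, coarsen = policy(mesh, beta)
    updated = remesh(mesh, refine, coarsen)
    return evolve(updated)
```

For example, with stub callables that track only the mesh size:

```python
out = lamp_step(
    {"n": 4},                                                     # mesh with 4 nodes
    lambda mesh, beta: (["e1"], []),                              # refine one edge
    lambda mesh, refine, coarsen: {"n": mesh["n"] + len(refine) - len(coarsen)},
    lambda mesh: dict(mesh, t=1),                                 # advance one step
    0.5,
)
```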

