LEARNING INCOMPRESSIBLE FLUID DYNAMICS FROM SCRATCH - TOWARDS FAST, DIFFERENTIABLE FLUID MODELS THAT GENERALIZE

Abstract

Fast and stable fluid simulations are an essential prerequisite for applications ranging from computer-generated imagery to computer-aided design in research and development. However, solving the partial differential equations of incompressible fluids is a challenging task, and traditional numerical approximation schemes come at high computational cost. Recent deep-learning-based approaches promise vast speed-ups but do not generalize to new fluid domains, require fluid simulation data for training, or rely on complex pipelines that outsource major parts of the fluid simulation to traditional methods. In this work, we propose a novel physics-constrained training approach that generalizes to new fluid domains, requires no fluid simulation data, and allows convolutional neural networks to map a fluid state from time-point t to a subsequent state at time t + dt in a single forward pass. This simplifies the pipeline for training and evaluating neural fluid models. After training, the framework yields models that are capable of fast fluid simulations and can handle various fluid phenomena, including the Magnus effect and Kármán vortex streets. We present an interactive real-time demo to show the speed and generalization capabilities of our trained models. Moreover, the trained neural networks are efficient differentiable fluid solvers, as they offer a differentiable update step to advance the fluid simulation in time. We exploit this fact in a proof-of-concept optimal control experiment. Our models significantly outperform a recent differentiable fluid solver in terms of computational speed and accuracy.

1. INTRODUCTION

Simulating the behavior of fluids by solving the incompressible Navier-Stokes equations is of great importance for a wide range of applications, and accurate as well as fast fluid simulations are a long-standing research goal. Beyond simulating the behavior of fluids, several applications such as sensitivity analysis of fluids or gradient-based control algorithms rely on differentiable fluid simulators that allow gradients to be propagated throughout the simulation (Holl et al. (2020)). Recent advances in deep learning aim for fast and accurate fluid simulations but rely on vast datasets and/or do not generalize to new fluid domains. Kim et al. (2019) present a framework to learn parameterized fluid simulations and to interpolate efficiently between such simulations. However, their work does not generalize to domain geometries that lie outside the training data. Kim & Lee (2020) train an RNN-GAN that produces turbulent flow fields within a pipe domain, but do not show generalization beyond pipe domains. Xie et al. (2018) introduce tempoGAN to perform temporally consistent super-resolution of smoke simulations. This makes it possible to produce plausible high-resolution smoke-density fields for arbitrary low-resolution inputs, whereas our fluid model should output a complete fluid state description consisting of a velocity and a pressure field. Tompson et al. (2017) show how a Helmholtz projection step can be learned to accelerate Eulerian fluid simulations. This method generalizes to new domain geometries, but a particle tracer is needed to deal with the advection term of the Navier-Stokes equations. Furthermore, since the underlying inviscid Euler equations do not model viscosity, effects such as the Magnus effect or Kármán vortex streets cannot be simulated. Geneva & Zabaras (2020) propose a physics-informed framework to learn the entire update step for the Burgers equations in 1D and 2D, but demonstrate no generalization results for new domain geometries.
All of the aforementioned methods rely on the availability of vast amounts of data from fluid solvers such as FEniCS, OpenFOAM or Mantaflow. Most of these methods do not generalize well or outsource a major part of the fluid simulation to traditional methods such as low-resolution fluid solvers or a particle tracer. In this work, we propose a novel unsupervised training framework to learn incompressible fluid dynamics from scratch. It does not require any simulated fluid data (neither as ground truth, nor to train an adversarial network, nor to initialize frames for a physics-constrained loss) and generalizes to fluid domains unseen during training. It allows CNNs to learn the entire update step of mapping a fluid domain from time-point t to t + dt without having to rely on low-resolution fluid solvers or a particle tracer. In fact, we will demonstrate that a physics-constrained loss function, combined with a simple strategy to recycle fluid data generated by the neural network at training time, suffices to teach CNNs fluid dynamics on increasingly realistic statistics of fluid states. This drastically simplifies the training pipeline. Fluid simulations are efficiently unrolled in time by recurrently applying the trained model to a fluid state. Furthermore, the fluid models include viscous friction and handle effects such as the Magnus effect and Kármán vortex streets. On top of that, we show with a gradient-based optimal control example how backpropagation through time can be used to differentiate the fluid simulation. Code and pretrained models are publicly available at https://github.com/aschethor/Unsupervised_Deep_Learning_of_Incompressible_Fluid_Dynamics/.
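The data-recycling strategy described above can be sketched in a few lines. Note that `fluid_model` and `physics_loss` below are hypothetical stand-ins (the actual method trains a CNN whose loss penalizes residuals of the incompressible Navier-Stokes equations, and its pool is initialized with zero-flow states on randomized domains); this is only a minimal sketch of the pool mechanics, not the paper's implementation:

```python
import numpy as np

def fluid_model(state):
    # Stand-in for the trained CNN mapping a fluid state from t to t + dt.
    return 0.99 * state

def physics_loss(old_state, new_state):
    # Stand-in for the physics-constrained loss penalizing PDE residuals;
    # in real training, a gradient step on the CNN weights would follow.
    return float(np.mean((new_state - old_state) ** 2))

rng = np.random.default_rng(0)
# Pool of fluid states (2 velocity channels on a 32x32 grid);
# no pre-simulated fluid data is needed to fill it.
pool = [rng.standard_normal((2, 32, 32)) for _ in range(8)]

for step in range(100):
    i = rng.integers(len(pool))      # draw a random state from the pool
    state = pool[i]
    next_state = fluid_model(state)  # predict the state at t + dt
    loss = physics_loss(state, next_state)
    pool[i] = next_state             # recycle the prediction into the pool
```

Recycling the network's own predictions back into the pool is what exposes training to increasingly realistic fluid-state statistics without any external simulation data.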

2. RELATED WORK

In the literature, several different approaches can be found that aim to approximate the dynamics of PDEs in general, and fluids in particular, with efficient, learning-based surrogate models. Lagrangian methods such as smoothed particle hydrodynamics (SPH) (Gingold & Monaghan (1977)) handle fluids from the perspective of many individual particles that move with the velocity field. Following this approach, learning-based methods using regression forests (Ladický et al. (2015)), graph neural networks (Mrowca et al. (2018); Li et al. (2019)) and continuous convolutions (Ummenhofer et al. (2020)) have been developed. In addition, Smooth Particle Networks (SP-Nets) by Schenck & Fox (2018) allow for differentiable fluid simulations within the Lagrangian frame of reference. These Lagrangian methods are particularly suitable when a fluid domain exhibits large, dynamic surfaces (e.g. waves or droplets). However, to simulate the dynamics within a fluid domain accurately, Eulerian methods, which treat the Navier-Stokes equations in a fixed frame of reference, are usually better suited.

Continuous Eulerian methods allow for mesh-free solutions by mapping domain coordinates (e.g. x, y, t) directly onto field values (e.g. velocity v / pressure p) (Sirignano & Spiliopoulos (2018); Grohs et al. (2018); Khoo et al. (2019)). Recent applications focused on flow through porous media (Zhu & Zabaras (2018); Zhu et al. (2019); Tripathy & Bilionis (2018)), fluid modeling (Yang et al. (2016); Raissi et al. (2018)), turbulence modeling (Geneva & Zabaras (2019); Ling et al. (2016)) and modeling of molecular dynamics (Schöberl et al. (2019)). Training is usually based on physics-constrained loss functions that penalize residuals of the underlying PDEs. Similar to our approach, Raissi et al. (2019) use vector potentials to obtain continuous divergence-free velocity fields to approximate the incompressible Navier-Stokes equations. Continuous methods return smooth, accurate results and can overcome the curse of dimensionality of discrete techniques in high-dimensional PDEs (Grohs et al. (2018)). However, these networks are trained on a specific domain and cannot generalize to new environments or be used in interactive scenarios.
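The vector-potential construction mentioned above can be verified numerically: in 2D, defining the velocity as the curl of a scalar potential a, i.e. v_x = ∂a/∂y and v_y = -∂a/∂x, yields a field whose discrete divergence vanishes for any choice of a. The following sketch uses NumPy and simple finite differences on an arbitrary grid; the grid size and discretization are illustrative assumptions, not the implementation of any of the cited works:

```python
import numpy as np

# Sample an arbitrary scalar potential a on the grid nodes.
rng = np.random.default_rng(0)
a = rng.standard_normal((65, 65))

# Velocity as the discrete curl of a (axis 0 = y, axis 1 = x):
v_x = np.diff(a, axis=0)   # v_x =  da/dy, shape (64, 65)
v_y = -np.diff(a, axis=1)  # v_y = -da/dx, shape (65, 64)

# Discrete divergence per cell: dv_x/dx + dv_y/dy.
div = np.diff(v_x, axis=1) + np.diff(v_y, axis=0)  # shape (64, 64)

# The mixed differences cancel term by term, so the divergence is zero
# up to floating-point round-off for ANY potential a.
print(np.abs(div).max())
```

Building incompressibility into the architecture this way removes the continuity equation from the loss entirely, instead of merely penalizing its residual.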

