SOBOLEV TRAINING FOR THE NEURAL NETWORK SOLUTIONS OF PDES

Abstract

Approximating the numerical solutions of partial differential equations (PDEs) using neural networks is a promising application of deep learning. The smooth architecture of a fully connected neural network is well suited to finding the solutions of PDEs; the corresponding loss function can be designed intuitively and guarantees convergence for various kinds of PDEs. However, the rate of convergence has been considered a weakness of this approach. This paper introduces a novel loss function for training neural networks to find the solutions of PDEs, making the training substantially more efficient. Inspired by recent studies that incorporate derivative information into the training of neural networks, we develop a loss function that guides a neural network to reduce the error in the corresponding Sobolev space. Surprisingly, a simple modification of the loss function can make the training process similar to Sobolev Training, although solving PDEs with neural networks is not a fully supervised learning task. We provide several theoretical justifications for this approach for the viscous Burgers equation and the kinetic Fokker-Planck equation. We also present several simulation results showing that, compared with the traditional L2 loss function, the proposed loss function guides the neural network to significantly faster convergence. Moreover, we provide empirical evidence that the proposed loss function, together with iterative sampling techniques, performs better in solving high-dimensional PDEs.

1. INTRODUCTION

Deep learning has achieved remarkable success in many scientific fields, including computer vision and natural language processing. Beyond engineering, deep learning has also been successfully applied to scientific computing. In particular, the use of neural networks for the numerical integration of partial differential equations (PDEs) has emerged as an important new application of deep learning. Being a universal approximator (Cybenko, 1989; Hornik et al., 1989; Li, 1996), a neural network can approximate solutions of complex PDEs. To find the neural network solution of a PDE, a neural network is trained on the domain on which the PDE is defined. Training a neural network comprises the following: feeding the input data through a forward pass and minimizing a predefined loss function with respect to the network parameters through a backward pass. In the traditional supervised learning setting, the loss function is designed to guide the neural network to produce the same output as the target data for a given input. However, when solving PDEs with neural networks, the target values that correspond to the analytic solution are not available. One possible way to guide the neural network to produce the same output as the solution of the PDE is to penalize the neural network for failing to satisfy the PDE itself (Sirignano & Spiliopoulos, 2018; Berg & Nyström, 2018; Raissi et al., 2019; Hwang et al., 2020). Unlike traditional mesh-based schemes, including the finite difference method (FDM) and the finite element method (FEM), neural networks are inherently mesh-free function approximators. Advantageously, as mesh-free function approximators, neural networks can avoid the curse of dimensionality (Sirignano & Spiliopoulos, 2018) and approximate the solutions of PDEs on complex geometries (Berg & Nyström, 2018). Recently, Hwang et al. (2020) showed that neural networks could approximate the solutions of kinetic Fokker-Planck equations under not only various kinds
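The residual-penalty idea above, together with the Sobolev-space loss described in the abstract, can be illustrated with a minimal sketch. The example below evaluates the viscous Burgers residual of a candidate solution on a grid using finite differences (an actual implementation would instead differentiate the network output with automatic differentiation), and compares the standard L2 residual loss with an H1-type loss that additionally penalizes the spatial derivative of the residual. The function names, grid setup, and viscosity value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def burgers_residual(u, x, t, nu=0.01):
    """Residual u_t + u * u_x - nu * u_xx of the viscous Burgers equation,
    approximated by finite differences on a (nt, nx) grid of values u."""
    dx = x[1] - x[0]
    dt = t[1] - t[0]
    u_t = np.gradient(u, dt, axis=0)    # time derivative
    u_x = np.gradient(u, dx, axis=1)    # spatial derivative
    u_xx = np.gradient(u_x, dx, axis=1) # second spatial derivative
    return u_t + u * u_x - nu * u_xx

def pde_losses(u, x, t, nu=0.01):
    """Return the plain L2 residual loss and an H1 (Sobolev-type) loss that
    also penalizes the spatial derivative of the residual."""
    r = burgers_residual(u, x, t, nu)
    r_x = np.gradient(r, x[1] - x[0], axis=1)
    l2 = np.mean(r ** 2)
    h1 = l2 + np.mean(r_x ** 2)  # extra derivative term drives Sobolev-space error down
    return l2, h1
```

By construction the H1 loss upper-bounds the L2 loss, so minimizing it forces both the residual and its derivative toward zero; this is the sense in which the training error is reduced in a Sobolev norm rather than only in L2.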

