BREAKING THE CURSE OF DIMENSIONALITY FOR PARAMETRIC ELLIPTIC PDES

Abstract

Motivated by recent empirical success, we examine how neural-network-based ansatz classes can break the curse of dimensionality for high-dimensional, nonlinear elliptic partial differential equations (PDEs) with variational structure. The high dimensionality of the PDEs can be induced either through a high-dimensional physical domain or through a high-dimensional parameter space; the latter includes parametric right-hand sides, parametric domains, and material constants. Our main result shows that any scheme that computes neural-network-based W^{1,p}-approximations leverages the extraordinary approximation capabilities of neural networks and is thus able to beat the curse of dimensionality if the ground-truth solution is smooth or possesses Barron regularity. Popular examples of W^{1,p}-convergent schemes include the Deep Ritz Method and physics-informed neural networks. We present numerical experiments supporting our theoretical findings.

1. INTRODUCTION

High-dimensional partial differential equations (PDEs) arise naturally in applications with a high-dimensional domain, a high-dimensional parameter space, or both. The former includes the Schrödinger equation in quantum physics, the Black-Scholes equation in finance, and the Hamilton-Jacobi-Bellman equation in control theory; we refer to Weinan et al. (2021); Bellman (1954). Examples of problems with a high-dimensional parameter space, on the other hand, are ubiquitous in engineering applications, for instance in varying material properties, right-hand sides, or even varying computational domains, as discussed in Hennigh et al. (2021); Ohlberger & Rave (2016). For problems with a high-dimensional physical domain, classical mesh-based approximation schemes face the curse of dimensionality, meaning that the computational cost increases exponentially with the dimension of the problem. In the case of parametric problems, one is typically interested in querying the PDE solution for many different parameter instances, possibly with low inference time. To this end, classical methods need to solve the equations repeatedly, once for every required parameter instance, a potentially prohibitively expensive or slow computational task, see Biegler et al. (2007). Even assuming additional, favorable structure of the solution of a high-dimensional PDE, be it latent low-dimensionality of the solution or a high degree of smoothness, it remains a challenge for classical methods to approximate the solution with acceptable accuracy, especially in situations of nonlinear solution manifolds, as discussed in Ohlberger & Rave (2016); Lee & Carlberg (2020).
Artificial neural networks have shown great potential in the approximation of high-dimensional functions, for instance in computer vision, classification, and natural language processing tasks, and are known to possess extraordinary approximation capabilities, with the possibility of achieving dimension-independent approximation rates for certain function classes, see Ma et al. (2022); Barron (1993); Yarotsky (2017); Gühring & Raslan (2021); Gühring et al. (2020). Therefore, investigating artificial neural networks as ansatz classes for the solution of PDEs or PDE solution operators has recently gained increased interest for high-dimensional and parametric problems. We refer to Kutyniok et al. (2022); Weinan & Wojtowytsch (2022); Jentzen et al. (2021); Chen et al. (2021) for theoretical studies. Successful empirical results of neural-network-based applications to PDEs posed in high-dimensional spaces include Hermann et al. (2020); Yu & E (2018); Han et al. (2018); Sirignano & Spiliopoulos (2018). For the parametric setting, we direct the reader to Li et al. (2021); Khoo et al. (2021); Lee & Carlberg (2020); Geist et al. (2021).
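
To make the variational viewpoint behind Ritz-type schemes concrete, the following is a minimal, self-contained sketch (not the method analyzed in this paper): it solves the one-dimensional Poisson problem -u'' = f on (0,1) with homogeneous Dirichlet conditions by minimizing the Dirichlet energy E(u) = ∫ (1/2) u'(x)^2 - f(x) u(x) dx over a one-hidden-layer tanh network whose hidden weights are frozen at random values, so the minimization over the outer weights reduces to a linear least-squares problem. All parameter choices (number of features, weight ranges, quadrature grid) are illustrative assumptions.

```python
# Hypothetical Ritz-style sketch: minimize the Dirichlet energy over a
# random-feature tanh network (hidden weights frozen, outer weights trained).
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_quad = 100, 2001
w = rng.uniform(-6.0, 6.0, n_feat)   # random hidden weights (assumption)
b = rng.uniform(-3.0, 3.0, n_feat)   # random hidden biases (assumption)

x = np.linspace(0.0, 1.0, n_quad)
dx = x[1] - x[0]
wq = np.full(n_quad, dx)             # trapezoid quadrature weights
wq[0] = wq[-1] = dx / 2

# Features phi_k(x) = x(1-x) tanh(w_k x + b_k) enforce the boundary conditions.
g, gp = x * (1 - x), 1 - 2 * x
s = np.tanh(np.outer(x, w) + b)
phi = g[:, None] * s
dphi = gp[:, None] * s + g[:, None] * (1 - s**2) * w   # exact phi_k'

f = np.pi**2 * np.sin(np.pi * x)     # manufactured right-hand side
A = dphi.T @ (wq[:, None] * dphi)    # stiffness matrix: \int phi_j' phi_k' dx
rhs = phi.T @ (wq * f)               # load vector:      \int f phi_k dx
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # energy minimizer in the span

u = phi @ c
err = np.max(np.abs(u - np.sin(np.pi * x)))   # exact solution: u = sin(pi x)
print(f"max error: {err:.2e}")
```

Because the hidden layer is frozen, the energy is quadratic in the outer weights and the minimizer solves a linear system; training the hidden weights as well, as in the Deep Ritz Method proper, turns this into a nonconvex optimization problem handled by stochastic gradient methods.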

