Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs

Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar; 24(89):1−97, 2023.

Abstract

The traditional development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. In this work, we present a generalization of neural networks, called neural operators, designed to learn operators mapping between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for the proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, meaning they share the same model parameters across different discretizations of the underlying function spaces. Additionally, we introduce four classes of efficient parameterizations: graph neural operators, multi-pole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application of neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators outperform existing machine-learning-based methodologies while being significantly faster than conventional PDE solvers.
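To make the architecture concrete, the sketch below shows one Fourier neural operator layer in 1D: a linear integral operator applied as a truncated multiplication in Fourier space, combined with a pointwise linear term and a nonlinear activation. This is a minimal illustrative example in PyTorch, not the authors' reference implementation; the names `SpectralConv1d`, `FNOLayer`, `modes`, and `width` are assumptions made for this sketch.

```python
# Minimal sketch of one Fourier neural operator layer (1D), assuming PyTorch.
# Hypothetical class/parameter names; not the paper's reference code.

import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Linear integral operator parameterized in Fourier space:
    keep the lowest `modes` frequencies and multiply them by learned weights."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, channels, n_grid) -- input function sampled on a grid
        v_hat = torch.fft.rfft(v)                       # transform to Fourier space
        out_hat = torch.zeros_like(v_hat)
        k = min(self.modes, v_hat.shape[-1])            # truncate to low frequencies
        out_hat[..., :k] = torch.einsum(
            "bim,iom->bom", v_hat[..., :k], self.weights[..., :k]
        )
        return torch.fft.irfft(out_hat, n=v.shape[-1])  # back to physical space


class FNOLayer(nn.Module):
    """One layer update v <- sigma(W v + K v), with K the spectral convolution."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)  # local term W

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.spectral(v) + self.pointwise(v))


# The learned weights act on a fixed number of Fourier modes, so the same
# parameters apply to inputs sampled at different resolutions.
layer = FNOLayer(channels=32, modes=16)
coarse = torch.randn(4, 32, 64)    # function sampled on 64 grid points
fine = torch.randn(4, 32, 256)     # same operator applied on 256 grid points
print(layer(coarse).shape, layer(fine).shape)
```

Truncating to a fixed set of Fourier modes is what decouples the parameter count from the grid size, which is one way the discretization-invariance described in the abstract can be realized in practice.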
