Learning Partial Differential Equations in Reproducing Kernel Hilbert Spaces

George Stepaniants; 24(86):1−72, 2023.

Abstract

A new approach is proposed for learning the fundamental solutions (Green’s functions) of various linear partial differential equations (PDEs) from sample pairs of input-output functions. This data-driven method builds on the theory of functional linear regression (FLR) and estimates the best-fit Green’s function and bias term of the fundamental solution in a reproducing kernel Hilbert space (RKHS). The RKHS allows for regularization of smoothness and the imposition of structural constraints. A general representer theorem for operator RKHSs is derived to approximate the original infinite-dimensional regression problem with a finite-dimensional one, reducing the search space to a parametric class of Green’s functions. The prediction error of the Green’s function estimator is studied by extending prior results on FLR with scalar outputs to the case with functional outputs. The method is demonstrated on several linear PDEs, including the Poisson, Helmholtz, Schrödinger, Fokker-Planck, and heat equations. The method’s robustness to noise and its ability to generalize to new data with varying degrees of smoothness and mesh discretization, without additional training, are highlighted.
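The operator-regression idea in the abstract can be illustrated with a simplified, discretized analogue: recovering the Green's function of a 1D Poisson problem from input-output function pairs by ordinary ridge regression on a grid. This is a minimal sketch, not the paper's RKHS estimator (no operator kernel or representer theorem); the PDE, grid size, forcing basis, and regularization strength below are illustrative choices.

```python
import numpy as np

# Hypothetical sketch: learn the Green's function of -u'' = f on [0,1] with
# u(0) = u(1) = 0 from sample pairs (f_i, u_i). Plain discretized ridge
# regression stands in for the paper's RKHS estimator; sizes are illustrative.

rng = np.random.default_rng(0)
m = 50                                   # grid resolution
x = np.linspace(0.0, 1.0, m)
dy = x[1] - x[0]                         # quadrature weight (rectangle rule)

# Closed-form Green's function of -d^2/dx^2 with Dirichlet boundary conditions:
# G(x, y) = min(x, y) - x*y
X, Y = np.meshgrid(x, x, indexing="ij")
G_true = np.minimum(X, Y) - X * Y

# Random smooth forcings f_i (sine series) and solutions u_i(x) = ∫ G(x,y) f_i(y) dy
n, n_modes = 200, 20
k = np.arange(1, n_modes + 1)
basis = np.sin(np.pi * np.outer(k, x))   # n_modes x m sine basis on the grid
F = rng.normal(size=(n, n_modes)) @ basis  # n x m forcing samples
U = dy * F @ G_true.T                      # n x m solution samples

# Ridge regression: minimize ||U - dy * F @ G.T||_F^2 + lam * ||G||_F^2
lam = 1e-8
A = dy * F
G_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ U).T

rel_err = np.linalg.norm(G_hat - G_true) / np.linalg.norm(G_true)
print(f"relative error in learned Green's function: {rel_err:.3f}")
```

Since the forcings span only the first 20 sine modes, the regression recovers the projection of the true Green's function onto that subspace; the RKHS formulation in the paper replaces the Frobenius penalty with a kernel norm, which is what enables smoothness regularization and transfer across mesh discretizations.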
