H.B. Keller Colloquium
Standard neural networks assume finite-dimensional inputs and outputs, and hence are unsuitable for modeling phenomena such as those arising from the solutions of partial differential equations (PDEs). We introduce neural operators, which learn operators: mappings between infinite-dimensional function spaces. By framing neural operators as non-linear compositions of kernel integral operators, we establish that they can universally approximate any continuous operator. They are independent of the resolution or discretization of the training data and allow zero-shot generalization to higher-resolution evaluations. We find that the Fourier neural operator solves turbulent fluid flows with a 1000x speedup over conventional numerical solvers. I will outline several applications where neural operators have delivered orders-of-magnitude speedups.
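
As a rough illustration of the core building block, the sketch below (a minimal, hypothetical PyTorch implementation, not the speaker's reference code) shows one Fourier layer: the input function is transformed with an FFT, a learned complex-valued weight acts on a truncated set of low-frequency modes, and an inverse FFT returns to physical space. Because the learned weights live on Fourier modes rather than grid points, the same layer can be evaluated on grids of different resolution.

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """One Fourier layer of a (hypothetical) 1D Fourier neural operator:
    FFT -> keep the lowest `modes` frequencies -> learned complex weights -> inverse FFT."""

    def __init__(self, in_channels: int, out_channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained low-frequency Fourier modes
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid_points); the grid size may differ between calls
        x_ft = torch.fft.rfft(x)  # function values -> Fourier coefficients
        out_ft = torch.zeros(
            x.size(0), self.weights.size(1), x_ft.size(-1),
            dtype=torch.cfloat, device=x.device,
        )
        # Multiply the retained low-frequency modes by the learned weights
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


# The same parameters apply at any resolution (zero-shot super-resolution):
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
coarse = layer(torch.randn(8, 1, 64))   # evaluated on a 64-point grid
fine = layer(torch.randn(8, 1, 256))    # evaluated on a 256-point grid, no retraining
```

Assuming this sketch, the learned weights are tied to frequency modes rather than to any particular grid, which is what underlies the resolution-independence and zero-shot super-resolution claims above; a full neural operator composes several such layers with pointwise non-linearities.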