no code implementations • 24 Feb 2024 • Nikola B. Kovachki, Samuel Lanthaler, Andrew M. Stuart
This review article summarizes recent progress and the current state of our theoretical understanding of neural operators, focusing on an approximation theoretic point of view.
no code implementations • 23 Oct 2023 • Anshuman Pradhan, Kyra H. Adams, Venkat Chandrasekaran, Zhen Liu, John T. Reager, Andrew M. Stuart, Michael J. Turmon
Modeling groundwater levels continuously across California's Central Valley (CV) hydrological system is challenging due to low-quality well data that are sparsely and noisily sampled across time and space.
no code implementations • 28 Jun 2023 • Samuel Lanthaler, Andrew M. Stuart
The first contribution of this paper is to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity", which is an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems.
2 code implementations • 21 Jun 2023 • Kaushik Bhattacharya, Nikola Kovachki, Aakila Rajan, Andrew M. Stuart, Margaret Trautner
However, a major challenge in data-driven learning approaches for this problem has remained unexplored: the impact of discontinuities and corner interfaces in the underlying material.
no code implementations • 26 Apr 2023 • Samuel Lanthaler, Zongyi Li, Andrew M. Stuart
A popular variant of neural operators is the Fourier neural operator (FNO).
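The core of an FNO layer is a multiplication of the retained low Fourier modes by learned spectral weights, combined with a pointwise linear term and a nonlinearity. A minimal single-layer sketch on a 1-D periodic grid follows; all names, shapes, and hyperparameters here are illustrative, not the paper's implementation:

```python
import numpy as np

def fourier_layer(x, R, W, k_max):
    """One illustrative Fourier-layer update on a 1-D periodic grid:
    transform, scale the retained low modes, truncate the rest,
    transform back, add a pointwise linear term, apply a nonlinearity."""
    xh = np.fft.rfft(x)
    xh[:k_max] *= R                 # learned complex spectral weights
    xh[k_max:] = 0.0                # mode truncation
    spectral = np.fft.irfft(xh, n=x.size)
    return np.maximum(spectral + W * x, 0.0)   # ReLU-type activation

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * np.arange(64) / 64)
y = fourier_layer(x, R=rng.normal(size=4) + 1j * rng.normal(size=4),
                  W=0.5, k_max=4)
```

Because the layer acts in Fourier space, it is defined independently of the grid resolution, which is the discretization-invariance property that makes the FNO an operator rather than a fixed-dimension network.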
1 code implementation • 21 Feb 2023 • Yifan Chen, Daniel Zhengyu Huang, Jiaoyang Huang, Sebastian Reich, Andrew M. Stuart
The flow in the space of Gaussians may be understood as a Gaussian approximation of the original flow.
no code implementations • 9 Aug 2022 • Ziming Liu, Andrew M. Stuart, YiXuan Wang
We propose a sampling method based on an ensemble approximation of second order Langevin dynamics.
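Second-order (underdamped) Langevin dynamics evolves positions and momenta jointly. A sketch of a plain Euler-Maruyama discretization for independent particles follows; the paper's ensemble approximation of these dynamics is not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

def underdamped_langevin(grad_logp, q, p, gamma=1.0, dt=0.05,
                         n_steps=2000, seed=0):
    """Euler-Maruyama discretization of second-order Langevin dynamics:
        dq = p dt,   dp = (grad log pi(q) - gamma p) dt + sqrt(2 gamma) dW,
    whose invariant measure has q-marginal proportional to pi."""
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        p = p + dt * (grad_logp(q) - gamma * p) \
            + np.sqrt(2.0 * gamma * dt) * rng.normal(size=np.shape(p))
        q = q + dt * p
    return q, p

# target: standard Gaussian, so grad log pi(q) = -q; run many chains at once
q, p = underdamped_langevin(lambda q: -q, np.zeros(5000), np.zeros(5000))
```

After a burn-in, the empirical distribution of `q` across chains approximates the target, up to an O(dt) discretization bias.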
no code implementations • 27 Aug 2021 • Maarten V. de Hoop, Nikola B. Kovachki, Nicholas H. Nelsen, Andrew M. Stuart
This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces.
no code implementations • 14 Jul 2021 • Matthew E. Levine, Andrew M. Stuart
For ergodic continuous-time systems, we prove that both excess risk and generalization error are bounded above by terms that decay with the square root of $T$, the time interval over which the training data are specified.
no code implementations • 7 Apr 2021 • Oliver R. A. Dunbar, Andrew B. Duncan, Andrew M. Stuart, Marie-Therese Wolfram
The ensemble Kalman methods are shown to behave favourably in the presence of noise in the parameter-to-data map, whereas Langevin methods are adversely affected.
1 code implementation • 2 Feb 2021 • Daniel Z. Huang, Tapio Schneider, Andrew M. Stuart
In this paper, we work with the ExKI, EKI, and a variant of EKI which we term unscented Kalman inversion (UKI).
Numerical Analysis • Dynamical Systems
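For reference, a single ensemble Kalman inversion update with perturbed observations can be sketched as follows; this is a generic EKI step with illustrative variable names, not the paper's UKI variant:

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """One ensemble Kalman inversion update with perturbed observations.
    U: (J, d) parameter ensemble; G: forward map R^d -> R^k;
    y: (k,) data; Gamma: (k, k) observation-noise covariance."""
    Gu = np.array([G(u) for u in U])        # forward evaluations
    du = U - U.mean(axis=0)
    dg = Gu - Gu.mean(axis=0)
    Cup = du.T @ dg / U.shape[0]            # parameter-output cross-covariance
    Cpp = dg.T @ dg / U.shape[0]            # output covariance
    Y = y + rng.multivariate_normal(np.zeros(y.size), Gamma, size=U.shape[0])
    return U + (Y - Gu) @ np.linalg.solve(Cpp + Gamma, Cup.T)

# illustrative linear problem: recover u_true from y = A u_true
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
y = A @ u_true
Gamma = 0.01 * np.eye(2)
U = rng.normal(size=(50, 2))
for _ in range(30):
    U = eki_step(U, lambda u: A @ u, y, Gamma, rng)
```

The update requires only forward evaluations of G, never its derivatives, which is what makes Kalman inversion attractive for expensive black-box models.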
no code implementations • 24 Dec 2020 • Oliver R. A. Dunbar, Alfredo Garbuno-Inigo, Tapio Schneider, Andrew M. Stuart
Here we demonstrate an approach to model calibration and uncertainty quantification that requires only $O(10^2)$ model runs and can accommodate internal climate variability.
Gaussian Processes • Statistics Theory
no code implementations • 25 Jul 2020 • Andrea L. Bertozzi, Bamdad Hosseini, Hao Li, Kevin Miller, Andrew M. Stuart
Graph-based semi-supervised regression (SSR) is the problem of estimating the value of a function on a weighted graph from its values (labels) on a small subset of the vertices.
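One standard formulation of this problem penalizes the data misfit on labelled vertices plus a graph-Laplacian smoothness term; the following is a minimal sketch of that generic estimator, not necessarily the one analyzed in the paper:

```python
import numpy as np

def laplacian_ssr(W, labeled_idx, labels, tau=0.1):
    """Graph-based semi-supervised regression: minimize over f
        sum_{i labelled} (f_i - y_i)^2 + tau * f^T L f,   L = D - W,
    which reduces to one symmetric linear solve."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    M = np.zeros((n, n))
    M[labeled_idx, labeled_idx] = 1.0    # indicator of labelled vertices
    b = np.zeros(n)
    b[labeled_idx] = labels
    return np.linalg.solve(M + tau * L, b)

# two disconnected triangles, one label each: labels propagate per component
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
f = laplacian_ssr(W, np.array([0, 3]), np.array([1.0, -1.0]))
```

On each connected component the solution is pulled toward the label it contains, so the two triangles receive the constant values 1 and -1 exactly.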
no code implementations • 22 May 2020 • Yifan Chen, Houman Owhadi, Andrew M. Stuart
The purpose of this paper is to study two paradigms for learning hierarchical parameters: one is the probabilistic Bayesian perspective, in particular the empirical Bayes approach widely used in Bayesian statistics; the other is the deterministic, approximation-theoretic view, in particular the kernel flow algorithm recently proposed in the machine learning literature.
1 code implementation • 20 May 2020 • Nicholas H. Nelsen, Andrew M. Stuart
Well known to the machine learning community, the random feature model is a parametric approximation to kernel interpolation or regression methods.
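A minimal random-Fourier-feature regression sketch illustrates the idea: draw random frequencies once, then solve a linear least-squares problem in the feature coefficients. The hyperparameters are illustrative, and the paper's function-space setting is richer than this scalar example:

```python
import numpy as np

def random_feature_fit(X, y, n_features=300, scale=1.0, reg=1e-6, seed=0):
    """Random feature model: random cosine features followed by ridge
    regression on the coefficients -- a parametric approximation to
    kernel ridge regression."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(scale=scale, size=(X.shape[1], n_features))
    shift = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    phi = lambda Z: np.cos(Z @ Omega + shift)
    Phi = phi(X)
    theta = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_features), Phi.T @ y)
    return lambda Z: phi(Z) @ theta

X = np.linspace(0.0, 2.0 * np.pi, 100)[:, None]
y = np.sin(X[:, 0])
predict = random_feature_fit(X, y)
```

Because the features are fixed after sampling, training is a single linear solve rather than a nonconvex optimization.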
no code implementations • 7 May 2020 • Kaushik Bhattacharya, Bamdad Hosseini, Nikola B. Kovachki, Andrew M. Stuart
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
no code implementations • 13 Sep 2019 • Franca Hoffmann, Bamdad Hosseini, Assad A. Oberai, Andrew M. Stuart
Graph Laplacians computed from weighted adjacency matrices are widely used to identify geometric structure in data, and clusters in particular; their spectral properties play a central role in a number of unsupervised and semi-supervised learning algorithms.
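As a concrete illustration of how spectral properties expose clusters, the eigenvector for the second-smallest Laplacian eigenvalue (the Fiedler vector) changes sign across a weakly connected bottleneck. This is a generic spectral-clustering sketch, not a construction from the paper:

```python
import numpy as np

def fiedler_vector(W):
    """Eigenvector of the unnormalized graph Laplacian L = D - W
    for the second-smallest eigenvalue; its sign pattern indicates
    a two-way partition of the graph."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)          # eigenvalues sorted ascending
    return vecs[:, 1]

# two triangles joined by one weak edge: the vector splits them by sign
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
v = fiedler_vector(W)
```

Thresholding the Fiedler vector at zero recovers the two triangles, the simplest instance of the spectral clustering algorithms the paper analyzes.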
no code implementations • 18 Jun 2019 • Franca Hoffmann, Bamdad Hosseini, Zhi Ren, Andrew M. Stuart
Graph-based semi-supervised learning is the problem of propagating labels from a small number of labelled data points to a larger set of unlabelled data.
no code implementations • 10 Jun 2019 • Nikola B. Kovachki, Andrew M. Stuart
First, we show that standard implementations of fixed-momentum methods approximate a time-rescaled gradient descent flow, asymptotically as the learning rate shrinks to zero; this result does not distinguish momentum methods from pure gradient descent in the limit of vanishing learning rate.
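This time-rescaling can be checked numerically on a quadratic: heavy-ball iterates with learning rate $h$ and fixed momentum $\beta$ track plain gradient descent run at the rescaled rate $h/(1-\beta)$ as $h \to 0$. An illustrative check with hypothetical step sizes:

```python
import numpy as np

def heavy_ball(grad, x0, h, beta, n_steps):
    """Fixed-momentum (heavy-ball) iteration:
    x_{k+1} = x_k - h * grad(x_k) + beta * (x_k - x_{k-1})."""
    x_prev, x = x0, x0
    for _ in range(n_steps):
        x, x_prev = x - h * grad(x) + beta * (x - x_prev), x
    return x

def gradient_descent(grad, x0, h, n_steps):
    x = x0
    for _ in range(n_steps):
        x = x - h * grad(x)
    return x

grad = lambda x: x                    # f(x) = x^2 / 2
h, beta, n = 1e-4, 0.9, 1000
x_mom = heavy_ball(grad, 1.0, h, beta, n)
x_gd = gradient_descent(grad, 1.0, h / (1.0 - beta), n)  # rescaled GD
```

Both iterates approximate the gradient flow value exp(-n h / (1 - beta)) of the same rescaled flow, so they nearly coincide for small h.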
no code implementations • 10 Aug 2018 • Nikola B. Kovachki, Andrew M. Stuart
The standard probabilistic perspective on machine learning gives rise to empirical risk-minimization tasks that are frequently solved by stochastic gradient descent (SGD) and variants thereof.
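In its simplest form, SGD minimizes the empirical risk by stepping along single-sample gradient estimates. A generic sketch, not tied to the paper's analysis, with illustrative hyperparameters:

```python
import numpy as np

def sgd(grad_sample, theta0, data, lr=0.05, epochs=100, seed=0):
    """Stochastic gradient descent on the empirical risk
    (1/N) sum_i loss(theta; x_i), using one sample per step."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            theta -= lr * grad_sample(theta, data[i])
    return theta

# squared loss (theta - x)^2 / 2: the empirical risk minimizer is the mean
data = np.array([1.0, 2.0, 3.0])
theta = sgd(lambda t, x: t - x, 0.0, data)
```

With a constant learning rate the iterates do not converge exactly but fluctuate in an O(lr)-neighbourhood of the minimizer, one of the behaviours that continuum limits of SGD make precise.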
no code implementations • 23 May 2018 • Matthew M. Dunlop, Dejan Slepčev, Andrew M. Stuart, Matthew Thorpe
Scalings in which the graph Laplacian approaches a differential operator in the large-graph limit are used to develop understanding of a number of algorithms for semi-supervised learning; in particular, the extension to this graph setting of the probit algorithm, level set, and kriging methods is studied.
no code implementations • 9 Mar 2018 • Victor Chen, Matthew M. Dunlop, Omiros Papaspiliopoulos, Andrew M. Stuart
One popular formulation of such problems is as Bayesian inverse problems, where a prior distribution is used to regularize inference on a high-dimensional latent state, typically a function or a field.
no code implementations • 26 Mar 2017 • Andrea L. Bertozzi, Xiyang Luo, Andrew M. Stuart, Konstantinos C. Zygalakis
In this paper we introduce a variety of Bayesian models for the task of binary classification, develop algorithms for them, and investigate their properties; via the posterior distribution on the classification labels, these methods automatically provide measures of uncertainty.
1 code implementation • 7 Mar 2016 • Andrew M. Stuart, Aretha L. Teckentrup
We prove error bounds on the Hellinger distance between the true posterior distribution and various approximations based on the Gaussian process emulator.
Numerical Analysis
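A minimal sketch of the emulation idea, with an illustrative kernel and forward map rather than the paper's construction: replace an expensive forward map by a Gaussian process posterior-mean emulator fitted to a few design points, and use the emulator inside the Bayesian posterior:

```python
import numpy as np

def gp_emulator(u_train, g_train, ell=0.5, nugget=1e-8):
    """Posterior-mean Gaussian process emulator of a scalar forward map,
    with a squared-exponential kernel of length-scale ell."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))
    K = k(u_train, u_train) + nugget * np.eye(u_train.size)
    alpha = np.linalg.solve(K, g_train)
    return lambda u: k(np.atleast_1d(u), u_train) @ alpha

# emulate G(u) = u^2 from 20 design points, then query off the design
u_train = np.linspace(-2.0, 2.0, 20)
G_hat = gp_emulator(u_train, u_train ** 2)
```

The Hellinger-distance bounds of the paper control how the emulator's approximation error propagates to the resulting approximate posterior.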