no code implementations • 24 Feb 2024 • Nikola B. Kovachki, Samuel Lanthaler, Andrew M. Stuart
This review article summarizes recent progress and the current state of our theoretical understanding of neural operators, focusing on an approximation-theoretic point of view.
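As a concrete instance of the architectures the review analyzes, here is a minimal NumPy sketch of a single Fourier-type neural operator layer (spectral convolution on truncated modes plus a pointwise linear map). All shapes, weight values, and the mode cutoff below are illustrative choices, not the paper's:

```python
import numpy as np

def fourier_layer(v, W, R, n_modes):
    """One Fourier neural operator layer (illustrative sketch).

    v : (n_grid, d) real array       -- input function on a uniform 1-D grid
    W : (d, d) real array            -- pointwise linear (local) weights
    R : (n_modes, d, d) complex array -- spectral weights on retained modes
    """
    v_hat = np.fft.rfft(v, axis=0)              # transform to Fourier space
    out_hat = np.zeros_like(v_hat)
    # spectral convolution: mix channels mode-by-mode on low frequencies only
    for k in range(min(n_modes, v_hat.shape[0])):
        out_hat[k] = R[k] @ v_hat[k]
    conv = np.fft.irfft(out_hat, n=v.shape[0], axis=0)
    return np.maximum(conv + v @ W.T, 0.0)      # pointwise nonlinearity (ReLU)

# tiny usage example with random weights
rng = np.random.default_rng(0)
n_grid, d, n_modes = 64, 4, 8
v = rng.standard_normal((n_grid, d))
W = rng.standard_normal((d, d)) / d
R = (rng.standard_normal((n_modes, d, d))
     + 1j * rng.standard_normal((n_modes, d, d))) / d
u = fourier_layer(v, W, R, n_modes)             # (64, 4) output function values
```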
no code implementations • 14 Feb 2023 • Jae Hyun Lim, Nikola B. Kovachki, Ricardo Baptista, Christopher Beckham, Kamyar Azizzadenesheli, Jean Kossaifi, Vikram Voleti, Jiaming Song, Karsten Kreis, Jan Kautz, Christopher Pal, Arash Vahdat, Anima Anandkumar
Diffusion models consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising.
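A minimal finite-dimensional sketch of the two processes, with the learned score replaced by the closed-form score of a standard Gaussian target (a stand-in chosen purely so the example runs; the paper's setting is function space and the score is a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x0, t, beta=1.0):
    """Forward process: perturb data with Gaussian white noise (OU/VP dynamics)."""
    mean = np.exp(-0.5 * beta * t) * x0
    std = np.sqrt(1.0 - np.exp(-beta * t))
    return mean + std * rng.standard_normal(x0.shape)

def score(x, t, beta=1.0):
    """Stand-in for the learned score: exact for N(0, I) data, where the
    perturbed marginal stays N(0, I). In practice a network s_theta(x, t)
    approximates grad_x log p_t(x)."""
    return -x

def reverse_sample(shape, n_steps=1000, T=5.0, beta=1.0):
    """Reverse process: Euler-Maruyama on the reverse-time SDE, denoising
    samples drawn from the noise prior."""
    dt = T / n_steps
    x = rng.standard_normal(shape)
    for i in range(n_steps):
        t = T - i * dt
        drift = -0.5 * beta * x - beta * score(x, t)
        x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(shape)
    return x

samples = reverse_sample((10000,))
print(samples.mean(), samples.var())   # ~0 and ~1 for this toy target
```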
no code implementations • 27 Aug 2021 • Maarten V. de Hoop, Nikola B. Kovachki, Nicholas H. Nelsen, Andrew M. Stuart
This paper studies the learning of linear operators between infinite-dimensional Hilbert spaces.
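In a discretized caricature (truncating both Hilbert spaces to n basis coefficients, with a hypothetical diagonal smoothing operator as ground truth), the estimation problem reduces to linear least squares from input-output pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# After truncation to n basis coefficients, the unknown linear operator
# between the two spaces is an n x n matrix acting on coefficient vectors.
n, n_samples, noise = 32, 200, 0.01
A_true = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)   # smoothing (diagonal) operator

X = rng.standard_normal((n_samples, n))             # random input coefficients
Y = X @ A_true.T + noise * rng.standard_normal((n_samples, n))  # noisy outputs

# Least-squares estimate of the operator from the input-output pairs
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = A_hat.T
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))  # small relative error
```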
1 code implementation • 11 Jun 2020 • Ricardo Baptista, Bamdad Hosseini, Nikola B. Kovachki, Youssef Marzouk
We present a novel framework for conditional sampling of probability measures, using block-triangular transport maps.
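A linear-Gaussian toy version of the idea: the map T(x, z) = (x, f(x, z)) keeps the conditioning variable in the first block and pushes latent noise through the second, so conditional sampling amounts to fixing the first block. The paper's maps are nonlinear, monotone, and learned; everything below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint samples (x, y) from a toy target: y = a*x + b*z with z ~ N(0, 1).
a_true, b_true = 2.0, 0.5
x = rng.standard_normal(5000)
y = a_true * x + b_true * rng.standard_normal(5000)

# Fit a linear block-triangular map T(x, z) = (x, a*x + b*z): identity in x,
# monotone in the latent z.
a_hat = np.cov(x, y)[0, 1] / np.var(x)
b_hat = np.sqrt(np.var(y - a_hat * x))

def sample_conditional(x_star, n):
    """Sample y | x = x_star by pushing latent noise through the second block."""
    z = rng.standard_normal(n)
    return a_hat * x_star + b_hat * z

ys = sample_conditional(1.0, 10000)
print(ys.mean(), ys.std())   # ~ a_true * 1.0 and ~ b_true
```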
no code implementations • 7 May 2020 • Kaushik Bhattacharya, Bamdad Hosseini, Nikola B. Kovachki, Andrew M. Stuart
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
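One concrete instance of such a framework: reduce both function spaces with PCA and learn a map between the latent coefficients. The sketch below uses a linear least-squares map in that latent space to stay short (a neural network would normally play that role); the data, grid, and rank are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: smooth random input functions on a grid (rows = samples),
# outputs given by a shift and rescaling plus small noise.
n_samples, n_grid, r = 500, 128, 10
t = np.linspace(0.0, 1.0, n_grid, endpoint=False)
coeffs = rng.standard_normal((n_samples, r)) / (1.0 + np.arange(r))
basis = np.cos(2.0 * np.pi * np.outer(np.arange(r), t))      # (r, n_grid)
U = coeffs @ basis
V = 0.7 * np.roll(U, 5, axis=1) + 0.01 * rng.standard_normal((n_samples, n_grid))

def pca_basis(X, r):
    """Mean and leading r principal directions of the data X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:r]

u_mean, Pu = pca_basis(U, r)
v_mean, Pv = pca_basis(V, r)

# Reduce both spaces to r PCA coefficients, then fit a map between them.
A = (U - u_mean) @ Pu.T                     # input coefficients, (n_samples, r)
B = (V - v_mean) @ Pv.T                     # output coefficients
M, *_ = np.linalg.lstsq(A, B, rcond=None)

def predict(u_new):
    """Map a new input function to an output function via the latent spaces."""
    return ((u_new - u_mean) @ Pu.T) @ M @ Pv + v_mean

err = np.linalg.norm(predict(U) - V) / np.linalg.norm(V)
print(err)                                  # small relative error
```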
no code implementations • 4 Sep 2019 • Lixue Cheng, Nikola B. Kovachki, Matthew Welborn, Thomas F. Miller III
Machine learning (ML) on molecular-orbital-based (MOB) features has been shown to be an accurate and transferable approach to predicting post-Hartree-Fock correlation energies.
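A minimal sketch of the regression step, using Gaussian process regression on synthetic stand-in features (the real MOB features come from a quantum chemistry pipeline; the kernel, sizes, and targets below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: rows are feature vectors for orbital pairs, targets
# are the corresponding pair correlation energy contributions.
n_train, n_feat = 200, 12
X = rng.standard_normal((n_train, n_feat))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n_train)

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between feature sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

# Gaussian process regression: posterior mean at test features.
sigma2 = 1e-2                                  # observation noise variance
K = rbf(X, X) + sigma2 * np.eye(n_train)
alpha = np.linalg.solve(K, y)

X_test = rng.standard_normal((5, n_feat))
y_pred = rbf(X_test, X) @ alpha                # predicted correlation energies
print(y_pred)
```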
no code implementations • 10 Jun 2019 • Nikola B. Kovachki, Andrew M. Stuart
First, we show that standard implementations of momentum methods with a fixed momentum parameter approximate a time-rescaled gradient descent flow as the learning rate shrinks to zero; in this vanishing-learning-rate limit, the result therefore does not distinguish momentum methods from pure gradient descent.
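This limit is easy to see numerically. Below is a minimal NumPy check on the quadratic f(x) = x^2/2 (the momentum value, step sizes, and horizon are arbitrary illustration choices), comparing heavy-ball iterates at time t = kh against the exactly solvable rescaled flow:

```python
import numpy as np

# For f(x) = x^2 / 2 the gradient is x, and the heavy-ball iteration with a
# fixed momentum parameter lam reads
#     x_{k+1} = x_k + lam * (x_k - x_{k-1}) - h * grad_f(x_k).
# As the learning rate h -> 0, iterate k tracks the time-rescaled gradient
# flow dx/dt = -x / (1 - lam) at time t = k * h, whose exact solution is
# x(t) = x0 * exp(-t / (1 - lam)).
lam, x0, T = 0.5, 1.0, 1.0

for h in (1e-2, 1e-3, 1e-4):
    n_steps = round(T / h)
    x_prev, x = x0, x0                       # zero initial velocity
    for _ in range(n_steps):
        x, x_prev = x + lam * (x - x_prev) - h * x, x
    flow = x0 * np.exp(-T / (1.0 - lam))
    print(f"h={h:.0e}  heavy ball: {x:.6f}  rescaled flow: {flow:.6f}")
```

The printed values converge to the rescaled-flow solution as h shrinks, matching the stated result.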
no code implementations • 10 Aug 2018 • Nikola B. Kovachki, Andrew M. Stuart
The standard probabilistic perspective on machine learning gives rise to empirical risk-minimization tasks that are frequently solved by stochastic gradient descent (SGD) and variants thereof.
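A minimal sketch of that pipeline for least-squares regression, with illustrative sizes and step size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical risk minimization for least-squares regression by SGD:
# minimize (1/n) * sum_i (w . x_i - y_i)^2 over w.
n, d = 1000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
lr, n_epochs, batch = 0.05, 20, 32
for _ in range(n_epochs):
    for idx in np.array_split(rng.permutation(n), n // batch):
        xb, yb = X[idx], y[idx]
        grad = 2.0 * xb.T @ (xb @ w - yb) / len(idx)  # minibatch gradient
        w -= lr * grad                                # stochastic gradient step
print(np.linalg.norm(w - w_true))                     # small recovery error
```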