Search Results for author: Nikola B. Kovachki

Found 8 papers, 1 paper with code

Operator Learning: Algorithms and Analysis

no code implementations • 24 Feb 2024 • Nikola B. Kovachki, Samuel Lanthaler, Andrew M. Stuart

This review article summarizes recent progress and the current state of our theoretical understanding of neural operators, focusing on an approximation theoretic point of view.

Model Discovery • Operator learning
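Not from the paper, but for orientation: the objects the review analyzes are neural operators, which alternate pointwise linear maps with learned kernel integral (e.g. Fourier) layers. Below is a minimal NumPy sketch of one Fourier-type layer on a 1-D grid; all sizes, names, and the random weights are illustrative, not the authors' construction.

```python
import numpy as np

def fourier_layer(v, W, R, modes):
    """One Fourier-type neural operator layer on a 1-D grid (illustrative sketch).

    v: (n, d) input function sampled at n grid points, d channels.
    W: (d, d) pointwise linear weight.
    R: (modes, d, d) complex spectral weights on the lowest Fourier modes.
    """
    n, _ = v.shape
    v_hat = np.fft.rfft(v, axis=0)                     # Fourier coefficients, (n//2+1, d)
    out_hat = np.zeros_like(v_hat)
    for k in range(modes):                             # act only on the low modes
        out_hat[k] = R[k] @ v_hat[k]
    kernel_part = np.fft.irfft(out_hat, n=n, axis=0)   # back to physical space
    return np.maximum(v @ W.T + kernel_part, 0.0)      # pointwise linear + ReLU

# Toy usage: a random layer applied to sin(x) on a 64-point grid.
rng = np.random.default_rng(0)
n, d, modes = 64, 4, 8
v = np.sin(np.linspace(0, 2 * np.pi, n))[:, None] * np.ones((1, d))
W = rng.standard_normal((d, d)) / d
R = (rng.standard_normal((modes, d, d)) + 1j * rng.standard_normal((modes, d, d))) / d
print(fourier_layer(v, W, R, modes).shape)             # (64, 4)
```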

Score-based Diffusion Models in Function Space

no code implementations • 14 Feb 2023 • Jae Hyun Lim, Nikola B. Kovachki, Ricardo Baptista, Christopher Beckham, Kamyar Azizzadenesheli, Jean Kossaifi, Vikram Voleti, Jiaming Song, Karsten Kreis, Jan Kautz, Christopher Pal, Arash Vahdat, Anima Anandkumar

Diffusion models consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising.

Denoising
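A finite-dimensional sketch of the forward/reverse structure described above (the paper itself works in function space, so this is only an analogy): perturb data with Gaussian noise and regress a network onto the score of the perturbation kernel. The `score_net` interface and all hyperparameters are placeholders.

```python
import numpy as np

def dsm_loss(score_net, x0, sigma, rng):
    """Denoising score matching at one noise level (finite-dimensional sketch)."""
    eps = rng.standard_normal(x0.shape)
    x = x0 + sigma * eps                       # forward process: add Gaussian white noise
    target = -eps / sigma                      # score of the kernel N(x; x0, sigma^2 I)
    return sigma**2 * np.mean((score_net(x, sigma) - target) ** 2)

# For x0 ~ N(0, I), the marginal of x is N(0, (1 + sigma^2) I) with exact score
# -x / (1 + sigma^2); the DSM objective is minimized (not zeroed) by this score.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((256, 16))
exact_score = lambda x, s: -x / (1.0 + s**2)
print(dsm_loss(exact_score, x0, sigma=0.5, rng=rng))
```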

Conditional Sampling with Monotone GANs: from Generative Models to Likelihood-Free Inference

1 code implementation • 11 Jun 2020 • Ricardo Baptista, Bamdad Hosseini, Nikola B. Kovachki, Youssef Marzouk

We present a novel framework for conditional sampling of probability measures, using block triangular transport maps.
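As a sketch of the mechanism (my reading of the abstract, in symbols): a block triangular map pushes a product reference measure forward to the joint distribution, and conditioning then reduces to freezing the first block.

```latex
% Reference measure \eta = \eta_x \otimes \eta_z, target joint \nu(dx, dy).
T(x, z) = \begin{pmatrix} T_1(x) \\ T_2(x, z) \end{pmatrix},
\qquad T_\sharp \eta = \nu .
% With T_1 pushing \eta_x to the x-marginal of \nu, samples from the
% conditional \nu(dy \mid x = x^\ast) at a fixed observation x^\ast are
y^{(i)} = T_2\!\left(x^\ast, z^{(i)}\right), \qquad z^{(i)} \sim \eta_z .
```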

Model Reduction and Neural Networks for Parametric PDEs

no code implementations • 7 May 2020 • Kaushik Bhattacharya, Bamdad Hosseini, Nikola B. Kovachki, Andrew M. Stuart

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
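A hedged sketch of this framework in a PCA setting: compress inputs and outputs with PCA and learn a map between the coefficient spaces. The paper uses a neural network for that map; to keep the sketch short and self-contained, a least-squares fit and a toy forward map stand in for it.

```python
import numpy as np

def fit_pca(X, r):
    """Top-r PCA basis of a snapshot matrix X (samples x grid points)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:r]                        # mean function and r basis functions

# Toy data: input functions A and output functions U on a 128-point grid
# (the cumulative sum is a stand-in for a PDE solution operator).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 128))
U = np.cumsum(A, axis=1) / 128.0

a_mean, Va = fit_pca(A, r=10)                  # reduce the input space
u_mean, Vu = fit_pca(U, r=10)                  # reduce the output space
alpha = (A - a_mean) @ Va.T                    # input PCA coefficients
beta = (U - u_mean) @ Vu.T                     # output PCA coefficients

# Learn the coefficient-to-coefficient map (least squares as an NN stand-in).
M, *_ = np.linalg.lstsq(alpha, beta, rcond=None)
U_pred = (alpha @ M) @ Vu + u_mean             # reconstruct predicted outputs
print(np.linalg.norm(U_pred - U) / np.linalg.norm(U))
```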

Regression-clustering for Improved Accuracy and Training Cost with Molecular-Orbital-Based Machine Learning

no code implementations • 4 Sep 2019 • Lixue Cheng, Nikola B. Kovachki, Matthew Welborn, Thomas F. Miller III

Machine learning (ML) in the representation of molecular-orbital-based (MOB) features has been shown to be an accurate and transferable approach to the prediction of post-Hartree-Fock correlation energies.

BIG-bench Machine Learning • Clustering • +2
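The regression-clustering idea in the title can be sketched generically (an illustrative toy, not the MOB-ML pipeline): alternate between assigning each sample to the regressor that predicts it best and refitting each regressor on its own cluster.

```python
import numpy as np

def regression_clustering(X, y, k, iters=20, seed=0):
    """Toy regression-clustering with k affine models (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(X))
    Xb = np.hstack([X, np.ones((len(X), 1))])           # affine feature augmentation
    for _ in range(iters):
        W = np.stack([
            np.linalg.lstsq(Xb[labels == j], y[labels == j], rcond=None)[0]
            if np.any(labels == j) else np.zeros(Xb.shape[1])
            for j in range(k)
        ])                                              # refit one regressor per cluster
        resid = (Xb @ W.T - y[:, None]) ** 2            # squared error under each model
        labels = resid.argmin(axis=1)                   # reassign by best-fitting model
    return W, labels

# Usage: data generated by two different linear rules on each half-space.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.where(X[:, 0] > 0, 3 * X[:, 0], -2 * X[:, 1])
W, labels = regression_clustering(X, y, k=2)
print(np.bincount(labels))                              # cluster sizes
```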

Continuous Time Analysis of Momentum Methods

no code implementations • 10 Jun 2019 • Nikola B. Kovachki, Andrew M. Stuart

First, we show that standard implementations of fixed momentum methods approximate a time-rescaled gradient descent flow as the learning rate shrinks to zero; in this vanishing-learning-rate limit, the result does not distinguish momentum methods from pure gradient descent.
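In symbols, the statement as summarized above (the rescaling factor is my recollection of the paper's result, so treat this as a sketch):

```latex
% Heavy-ball iteration with momentum \lambda \in [0,1) and learning rate h:
x_{k+1} = x_k + \lambda \,(x_k - x_{k-1}) - h \,\nabla f(x_k).
% As h \to 0 with \lambda fixed, the iterates track the time-rescaled
% gradient flow
\dot{x} = -\frac{1}{1-\lambda}\,\nabla f(x),
% i.e. gradient descent run on a clock sped up by the factor (1-\lambda)^{-1}.
```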

Ensemble Kalman Inversion: A Derivative-Free Technique For Machine Learning Tasks

no code implementations • 10 Aug 2018 • Nikola B. Kovachki, Andrew M. Stuart

The standard probabilistic perspective on machine learning gives rise to empirical risk-minimization tasks that are frequently solved by stochastic gradient descent (SGD) and variants thereof.

BIG-bench Machine Learning
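A minimal NumPy sketch of one ensemble Kalman inversion update in its deterministic form (no observation perturbation), to show the derivative-free character: the forward map is only ever evaluated, never differentiated. The toy linear problem and all sizes are illustrative.

```python
import numpy as np

def eki_step(U, G, y, gamma):
    """One ensemble Kalman inversion update (deterministic variant, sketch).

    U: (J, d) ensemble of parameter vectors; G: forward map on one vector;
    y: (m,) data; gamma: (m, m) observation-noise covariance.
    """
    W = np.apply_along_axis(G, 1, U)                      # forward evaluations only
    Cuw = (U - U.mean(0)).T @ (W - W.mean(0)) / len(U)    # cross-covariance C^{uw}
    Cww = (W - W.mean(0)).T @ (W - W.mean(0)) / len(U)    # output covariance C^{ww}
    return U + (Cuw @ np.linalg.solve(Cww + gamma, (y - W).T)).T

# Toy linear inverse problem y = A u (A, sizes, and noise level are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
u_true = rng.standard_normal(3)
y = A @ u_true
U = rng.standard_normal((50, 3))                          # initial ensemble
for _ in range(30):
    U = eki_step(U, lambda u: A @ u, y, 1e-4 * np.eye(5))
print(np.linalg.norm(U.mean(0) - u_true))                 # mean approaches u_true
```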
