no code implementations • 13 Mar 2024 • Lianghao Cao, Thomas O'Leary-Roseberry, Omar Ghattas
Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10–25 effective posterior samples, compared to geometric MCMC.
1 code implementation • 31 May 2023 • Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas
We propose a novel machine learning framework for solving optimization problems governed by large-scale partial differential equations (PDEs) with high-dimensional random parameters.
no code implementations • 6 Oct 2022 • Lianghao Cao, Thomas O'Leary-Roseberry, Prashant K. Jha, J. Tinsley Oden, Omar Ghattas
We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, all while retaining substantial computational speedups in posterior sampling when the models are governed by highly nonlinear PDEs.
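A minimal sketch of this style of error correction, assuming a generic nonlinear residual R(u) = 0 whose Jacobian is available; the residual, Jacobian, and surrogate prediction below are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

def newton_correct(u_pred, residual, jacobian):
    """One Newton step on the PDE residual, starting from a surrogate
    prediction. If the surrogate error is eps, the corrected error is
    O(eps^2) for smooth residuals: a quadratic error reduction."""
    r = residual(u_pred)            # residual at the surrogate output
    J = jacobian(u_pred)            # Jacobian of the residual
    du = np.linalg.solve(J, -r)     # linearized correction
    return u_pred + du

# Hypothetical toy problem: R(u) = u^3 - b, solved componentwise.
b = np.array([8.0, 27.0])
residual = lambda u: u**3 - b
jacobian = lambda u: np.diag(3.0 * u**2)

u_pred = np.array([2.1, 2.9])       # stand-in for a neural operator output
print(newton_correct(u_pred, residual, jacobian))  # near the exact [2, 3]
```

The correction costs one linearized solve, which is typically far cheaper than the repeated nonlinear solves it replaces inside a posterior sampling loop.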
1 code implementation • 21 Jun 2022 • Thomas O'Leary-Roseberry, Peng Chen, Umberto Villa, Omar Ghattas
We propose derivative-informed neural operators (DINOs), a general family of neural networks to approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest.
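A minimal sketch of the derivative-informed training idea, assuming the operator has already been restricted to finite reduced coordinates; the dimensions, network, and synthetic targets below are placeholders (in practice the Jacobian targets would come from tangent/adjoint solves of the underlying model):

```python
import torch
import torch.nn as nn

r_in, r_out = 20, 10                 # hypothetical reduced dimensions
model = nn.Sequential(nn.Linear(r_in, 64), nn.Tanh(), nn.Linear(64, r_out))

def dino_loss(m, u_true, J_true):
    """H1-type loss: penalize misfit in both the operator output and its
    Jacobian with respect to the input parameters."""
    u_pred = model(m)
    # Per-sample Jacobian of the network output w.r.t. its input.
    J_pred = torch.stack([
        torch.autograd.functional.jacobian(model, mi, create_graph=True)
        for mi in m
    ])
    return ((u_pred - u_true) ** 2).mean() + ((J_pred - J_true) ** 2).mean()

m = torch.randn(4, r_in)
u_true = torch.randn(4, r_out)       # stand-ins for reduced model outputs
J_true = torch.randn(4, r_out, r_in) # stand-ins for reduced Jacobians
dino_loss(m, u_true, J_true).backward()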
2 code implementations • 14 Dec 2021 • Thomas O'Leary-Roseberry, Xiaosong Du, Anirban Chaudhuri, Joaquim R. R. A. Martins, Karen Willcox, Omar Ghattas
We propose a scalable framework for the learning of high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs.
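A minimal sketch of the reduced-basis ResNet idea, with POD bases from snapshot SVDs and a fixed depth; the paper's construction adapts the architecture during training, and all names and dimensions here are illustrative:

```python
import torch
import torch.nn as nn

def pod_basis(snapshots, r):
    """Leading r left singular vectors of a snapshot matrix (columns = samples)."""
    U, _, _ = torch.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

class ReducedResNet(nn.Module):
    """ResNet acting between reduced coordinates of inputs and outputs."""
    def __init__(self, V_in, V_out, width=64, depth=3):
        super().__init__()
        self.V_in, self.V_out = V_in, V_out        # (d_in, r) and (d_out, r)
        r = V_in.shape[1]
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Linear(r, width), nn.Tanh(), nn.Linear(width, r))
            for _ in range(depth)])
        self.head = nn.Linear(r, V_out.shape[1])

    def forward(self, x):
        z = x @ self.V_in                          # encode in input basis
        for blk in self.blocks:
            z = z + blk(z)                         # residual connections
        return self.head(z) @ self.V_out.T         # decode with output basis

V_in = pod_basis(torch.randn(128, 40), r=12)       # stand-in snapshot matrices
V_out = pod_basis(torch.randn(64, 40), r=12)
y = ReducedResNet(V_in, V_out)(torch.randn(5, 128))  # shape (5, 64)
```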
1 code implementation • 30 Nov 2020 • Thomas O'Leary-Roseberry, Umberto Villa, Peng Chen, Omar Ghattas
We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively.
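A minimal sketch of that weight construction, assuming the active subspace and principal output subspace bases have already been computed (random orthonormal stand-ins below):

```python
import torch
import torch.nn as nn

d_in, d_out, r = 100, 50, 8
AS = torch.linalg.qr(torch.randn(d_in, r)).Q    # stand-in active subspace basis
Phi = torch.linalg.qr(torch.randn(d_out, r)).Q  # stand-in output POD basis

net = nn.Sequential(
    nn.Linear(d_in, r, bias=False),   # first layer projects onto active subspace
    nn.Linear(r, r), nn.Tanh(),
    nn.Linear(r, d_out, bias=False),  # last layer expands in the output basis
)
with torch.no_grad():
    net[0].weight.copy_(AS.T)         # rows of W_first = input basis vectors
    net[-1].weight.copy_(Phi)         # columns of W_last = output basis vectors
```

Starting from informed weights rather than random ones is what the paper credits for achieving accuracy with limited training data.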
2 code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas
In this work we motivate the extension of Newton methods to the stochastic approximation (SA) regime, and argue for the use of the scalable low-rank saddle-free Newton (LRSFN) method, which avoids forming the Hessian explicitly in favor of a low-rank approximation.
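A minimal sketch of one LRSFN-style step, assuming only a Hessian-vector product routine hess_vec (a placeholder for, e.g., autodiff HVPs on a minibatch); the randomized eigensolver and the damping value are illustrative choices:

```python
import numpy as np

def lrsfn_step(grad, hess_vec, n, r, damping=1.0):
    """Low-rank saddle-free Newton step: estimate the r dominant Hessian
    eigenpairs from Hessian-vector products, take absolute values of the
    eigenvalues so negative-curvature directions are descended rather
    than ascended, and damp the step off the learned subspace."""
    Omega = np.random.randn(n, r)                    # randomized range finder
    Y = np.column_stack([hess_vec(Omega[:, i]) for i in range(r)])
    Q, _ = np.linalg.qr(Y)
    T = Q.T @ np.column_stack([hess_vec(Q[:, i]) for i in range(r)])
    lam, S = np.linalg.eigh(0.5 * (T + T.T))         # small r x r eigenproblem
    V = Q @ S                                        # approximate eigenvectors
    c = V.T @ grad
    step = V @ (c / (np.abs(lam) + damping))         # saddle-free inverse
    step += (grad - V @ c) / damping                 # damped off-subspace part
    return -step
```

Only 2r Hessian-vector products and an r x r eigensolve are needed per step, which is what makes this kind of method scalable in high dimensions.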
no code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Omar Ghattas
We show that the nonlinear activation functions used in the network construction play a critical role in classifying stationary points of the loss landscape.
1 code implementation • 16 May 2019 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas
We survey sub-sampled inexact Newton methods and consider their application in non-convex settings.
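A minimal sketch of a sub-sampled Hessian-free Newton-CG step of the kind such surveys cover, assuming a placeholder callable hvp_batch that returns Hessian-vector products evaluated on a random subsample:

```python
import numpy as np

def subsampled_newton_step(grad, hvp_batch, cg_tol=0.1, max_iter=50):
    """Inexact Newton step p ~ -H_S^{-1} g via conjugate gradients, where
    H_S is the Hessian on a subsample S. CG is truncated early (hence
    'inexact') and exits on negative curvature, which CG detects for
    free: essential in non-convex settings."""
    p = np.zeros_like(grad)
    r = -grad.copy()                        # residual of H_S p = -g
    d = r.copy()
    tol = cg_tol * np.linalg.norm(grad)
    for _ in range(max_iter):
        Hd = hvp_batch(d)
        curv = d @ Hd
        if curv <= 0:
            # Negative curvature: keep progress so far, or fall back to
            # the steepest-descent direction on the first iteration.
            return p if p.any() else -grad
        alpha = (r @ r) / curv
        p += alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p
```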
Optimization and Control • Numerical Analysis