Search Results for author: Shreyas Padhy

Found 12 papers, 8 papers with code

A Generative Model of Symmetry Transformations

no code implementations • 4 Mar 2024 • James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.

Transport meets Variational Inference: Controlled Monte Carlo Diffusions

1 code implementation • 3 Jul 2023 • Francisco Vargas, Shreyas Padhy, Denis Blessing, Nikolas Nüsken

Connecting optimal transport and variational inference, we present a principled and systematic framework for sampling and generative modelling centred around divergences on path space.

Variational Inference
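
The basic primitive behind such diffusion-based samplers is simulating a controlled SDE. A purely illustrative sketch, not the authors' code: Euler–Maruyama integration of dX_t = u(X_t, t) dt + σ dW_t, where the hand-written control `u` stands in for the control that the paper would learn by minimising a divergence between path measures.

```python
import numpy as np

# Minimal sketch: simulate a controlled diffusion
#   dX_t = u(X_t, t) dt + sigma dW_t
# with Euler-Maruyama. The control u here is a placeholder; in the paper
# it would be learned by matching divergences on path space.
def euler_maruyama(u, x0, sigma=1.0, T=1.0, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + u(x, k * dt) * dt + sigma * np.sqrt(dt) * noise
        path.append(x.copy())
    return np.stack(path)

# Placeholder control that drifts samples toward mean 2.
path = euler_maruyama(lambda x, t: -(x - 2.0), x0=np.zeros(2))
```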

Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

1 code implementation • NeurIPS 2023 • Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, Alexander Terenin

Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems.

Bayesian Optimization · Decision Making +1
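
The idea is to replace the cubic-cost Cholesky solve for the GP representer weights with cheap stochastic iterations on a quadratic objective. A rough sketch under simplifying assumptions: randomised block-coordinate updates stand in for the paper's stochastic estimator, and all hyperparameters are illustrative.

```python
import numpy as np

# Sketch: obtain the GP posterior mean by iteratively solving
# (K + noise * I) v = y, instead of a Cholesky factorisation.
def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N = 500
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)
noise = 0.1

K = rbf_kernel(X, X)
v = np.zeros(N)  # representer weights
lr = 1e-3
for step in range(5000):
    idx = rng.choice(N, size=64, replace=False)
    # Residual of the linear system on a random block of rows.
    resid = K[idx] @ v + noise * v[idx] - y[idx]
    v[idx] -= lr * resid

X_test = np.linspace(-3, 3, 100)[:, None]
posterior_mean = rbf_kernel(X_test, X) @ v
# Posterior samples follow by re-solving with perturbed targets
# (pathwise conditioning).
```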

Kernel Regression with Infinite-Width Neural Networks on Millions of Examples

no code implementations • 9 Mar 2023 • Ben Adlam, Jaehoon Lee, Shreyas Padhy, Zachary Nado, Jasper Snoek

Using this approach, we study scaling laws of several neural kernels across many orders of magnitude for the CIFAR-5m dataset.

Data Augmentation · Regression
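
Kernel regression at millions of examples hinges on never materialising the full Gram matrix; matrix-free iterative solvers only need kernel-vector products. A hedged sketch of that pattern with a stand-in RBF kernel (the paper uses infinite-width neural kernels; the conjugate-gradient solver and block sizes here are illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Kernel ridge regression solved with conjugate gradients. Only
# kernel-vector products are ever computed, block by block, so the
# full n x n Gram matrix is never stored.
def kernel_row_block(X, Xb):
    d2 = ((Xb[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(0)
N = 2000
X = rng.standard_normal((N, 10))
y = X[:, 0] + 0.1 * rng.standard_normal(N)
ridge = 1e-2

def matvec(v):
    out = np.empty(N)
    for i in range(0, N, 500):
        out[i:i + 500] = kernel_row_block(X, X[i:i + 500]) @ v
    return out + ridge * v

op = LinearOperator((N, N), matvec=matvec)
alpha, info = cg(op, y)                       # info == 0 on convergence
pred = kernel_row_block(X, X[:5]) @ alpha     # predictions at 5 points
```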

Sampling-based inference for large linear models, with application to linearised Laplace

1 code implementation • 10 Oct 2022 • Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato

Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.

Bayesian Inference · Uncertainty Quantification
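
The enabling trick is that a sample from a Gaussian posterior over linear-model weights can be obtained by minimising a randomly perturbed least-squares objective, so only SGD-scale machinery is needed. A simplified sketch on plain Bayesian linear regression (names and hyperparameters are illustrative; in the linearised Laplace setting `Phi` would be Jacobian features of the network):

```python
import numpy as np

# Sample-then-optimize: the minimiser of a least-squares objective with
# perturbed targets and prior mean is one exact draw from the posterior
# N(mean, (Phi^T Phi / s2 + I / a)^(-1)) of a Bayesian linear model.
rng = np.random.default_rng(0)
N, D = 1000, 50
Phi = rng.standard_normal((N, D))             # feature matrix
y = Phi @ rng.standard_normal(D) + 0.1 * rng.standard_normal(N)
s2, a = 0.01, 1.0                             # noise var, prior var

eps = np.sqrt(s2) * rng.standard_normal(N)    # perturb observations
theta0 = np.sqrt(a) * rng.standard_normal(D)  # perturb prior mean

theta = np.zeros(D)
lr = 1e-6
for step in range(5000):
    idx = rng.choice(N, size=64, replace=False)
    resid = Phi[idx] @ theta - (y[idx] + eps[idx])
    grad = (N / 64) * Phi[idx].T @ resid / s2 + (theta - theta0) / a
    theta -= lr * grad

# theta approximates one posterior sample; repeat with fresh eps, theta0.
```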

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness

2 code implementations • 1 May 2022 • Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan

The most popular approaches for estimating predictive uncertainty in deep learning combine predictions from multiple neural networks, as in Bayesian neural networks (BNNs) and deep ensembles.

Data Augmentation · Probabilistic Deep Learning +1
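
"Distance-awareness" means predictive uncertainty should grow with distance from the training data. One way to obtain it in a single model, sketched below under simplifying assumptions, is a GP-style output layer built from random Fourier features with a Laplace-style covariance; the paper additionally spectral-normalises the feature extractor, which is omitted here, and all names are illustrative.

```python
import numpy as np

# Distance-aware output layer: an RBF-kernel GP approximated with random
# Fourier features, so predictive variance grows as inputs move away
# from the training features.
rng = np.random.default_rng(0)
D, M = 16, 256                       # feature dim, num. Fourier features
W = rng.standard_normal((D, M))
b = rng.uniform(0, 2 * np.pi, M)

def rff(h):
    return np.sqrt(2.0 / M) * np.cos(h @ W + b)

# Fit a Bayesian linear model on the random features (Laplace-style).
H = rng.standard_normal((500, D))    # stand-in for penultimate features
y = rng.integers(0, 2, 500) * 2.0 - 1.0
Phi = rff(H)
A = Phi.T @ Phi + np.eye(M)          # posterior precision (unit scales)
beta = np.linalg.solve(A, Phi.T @ y)

def predict(h):
    phi = rff(h)
    var = phi @ np.linalg.solve(A, phi)  # grows off the training manifold
    return phi @ beta, var
```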

A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection

3 code implementations • 16 Jun 2021 • Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, Balaji Lakshminarayanan

Mahalanobis distance (MD) is a simple and popular post-processing method for detecting out-of-distribution (OOD) inputs in neural networks.

Intent Detection · Out-of-Distribution Detection +1
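
The paper's fix is the relative Mahalanobis distance: subtract from the usual class-conditional Mahalanobis score the distance to a single "background" Gaussian fit to all training features. A minimal sketch (shapes and the regularisation constant are illustrative):

```python
import numpy as np

# Relative Mahalanobis distance (RMD) for OOD detection on embeddings.
def fit_class_stats(Z, labels):
    # Class-conditional means with a shared (tied) covariance, as in the
    # standard Mahalanobis OOD detector.
    classes = np.unique(labels)
    means = {c: Z[labels == c].mean(0) for c in classes}
    centered = np.vstack([Z[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
    return means, np.linalg.inv(cov)

def fit_background(Z):
    # A single class-agnostic Gaussian over all training features.
    cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
    return Z.mean(0), np.linalg.inv(cov)

def rmd_score(z, means, prec, mu0, prec0):
    md = min((z - m) @ prec @ (z - m) for m in means.values())
    md0 = (z - mu0) @ prec0 @ (z - mu0)
    return md - md0  # higher = more likely out-of-distribution
```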

Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift

no code implementations • 19 Jun 2020 • Zachary Nado, Shreyas Padhy, D. Sculley, Alexander D'Amour, Balaji Lakshminarayanan, Jasper Snoek

Using this one-line code change, we achieve state-of-the-art results on recent covariate shift benchmarks and an mCE of 60.28% on the challenging ImageNet-C dataset; to our knowledge, this is the best result for any model that does not incorporate additional data augmentation or modification of the training pipeline.

Data Augmentation
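
The "one-line code change" refers to normalising with the statistics of the test batch in batch-normalisation layers instead of the running averages accumulated during training. A hedged PyTorch sketch of that idea (note that train-mode BN also updates the running buffers; reset them afterwards if they matter):

```python
import torch

@torch.no_grad()
def predict_with_batch_stats(model, x):
    # Prediction-time BN: put only the BN layers in train mode so they
    # normalise with the current batch's statistics under covariate shift.
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.train()  # the "one line" change
    return model(x)
```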
