Search Results for author: Jacob R. Gardner

Found 29 papers, 14 papers with code

Stochastic Approximation with Biased MCMC for Expectation Maximization

1 code implementation • 27 Feb 2024 • Samuel Gruffaz, Kyurae Kim, Alain Oliviero Durmus, Jacob R. Gardner

In practice, MCMC-SAEM is often run with asymptotically biased MCMC, whose consequences are less well understood theoretically.

Bayesian Inference

Provably Scalable Black-Box Variational Inference with Structured Variational Families

no code implementations • 19 Jan 2024 • Joohwan Ko, Kyurae Kim, Woo Chang Kim, Jacob R. Gardner

In fact, recent computational complexity results for BBVI have established that full-rank variational families scale poorly with the dimensionality of the problem compared to, e.g., mean-field families.
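
As a concrete illustration of the scaling gap (standard parameter counts for d-dimensional Gaussian variational families, not results from the paper):

```latex
q_{\text{mean-field}}(z) = \mathcal{N}\!\big(z;\, \mu,\, \operatorname{diag}(\sigma^2)\big)
  \quad\Rightarrow\quad 2d \text{ parameters},
\qquad
q_{\text{full-rank}}(z) = \mathcal{N}\!\big(z;\, \mu,\, L L^{\top}\big)
  \quad\Rightarrow\quad d + \tfrac{d(d+1)}{2} \text{ parameters}
```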

Variational Inference

Large-Scale Gaussian Processes via Alternating Projection

1 code implementation • 26 Oct 2023 • Kaiwen Wu, Jonathan Wenger, Haydn Jones, Geoff Pleiss, Jacob R. Gardner

Training and inference in Gaussian processes (GPs) require solving linear systems with $n\times n$ kernel matrices.
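
As a minimal illustration of where those linear systems appear, here is a dense direct-solve NumPy baseline for the GP posterior mean; the kernel, data, and sizes are hypothetical, and this O(n^3) approach is exactly what scalable methods like the paper's alternating projection aim to avoid:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    # Squared-exponential kernel matrix between two point sets.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # hypothetical training inputs
y = np.sin(X).sum(-1) + 0.1 * rng.normal(size=200)
x_star = rng.normal(size=(1, 3))                   # a test point

noise_var = 0.1**2
K = rbf_kernel(X, X) + noise_var * np.eye(len(X))  # the n x n kernel matrix
k_star = rbf_kernel(x_star, X)

# Hyperparameter training (the marginal likelihood) and prediction both
# reduce to solves against K; a dense solve costs O(n^3), which is the
# bottleneck that scalable GP solvers target.
alpha = np.linalg.solve(K, y)
posterior_mean = k_star @ alpha
```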

Gaussian Processes • Hyperparameter Optimization

Linear Convergence of Black-Box Variational Inference: Should We Stick the Landing?

no code implementations • 27 Jul 2023 • Kyurae Kim, Yian Ma, Jacob R. Gardner

We prove that black-box variational inference (BBVI) with control variates, particularly the sticking-the-landing (STL) estimator, converges at a geometric (traditionally called "linear") rate under perfect variational family specification.
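
A minimal PyTorch sketch of the STL idea as commonly stated (Roeder et al.'s estimator): detach the variational parameters inside log q so the score-function term of the gradient vanishes, leaving only the reparameterization path term. The toy log-joint and dimensions are mine:

```python
import torch

def log_joint(z):
    # Hypothetical toy target: standard-normal log density (up to a constant).
    return -0.5 * (z ** 2).sum(-1)

def stl_elbo(mu, log_sigma, n_samples=64):
    eps = torch.randn(n_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps                # reparameterized draws from q
    # STL: stop gradients through q's parameters inside log q(z), so the
    # high-variance score-function term vanishes identically.
    q_stopped = torch.distributions.Normal(mu.detach(), log_sigma.exp().detach())
    log_q = q_stopped.log_prob(z).sum(-1)
    return (log_joint(z) - log_q).mean()

mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
stl_elbo(mu, log_sigma).backward()  # STL gradients land in mu.grad, log_sigma.grad
```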

Variational Inference

The Behavior and Convergence of Local Bayesian Optimization

1 code implementation • NeurIPS 2023 • Kaiwen Wu, Kyurae Kim, Roman Garnett, Jacob R. Gardner

A recent development in Bayesian optimization is the use of local optimization strategies, which can deliver strong empirical performance on high-dimensional problems compared to traditional global strategies.

Bayesian Optimization

On the Convergence of Black-Box Variational Inference

no code implementations • NeurIPS 2023 • Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner

We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.

Bayesian Inference • Variational Inference

Practical and Matching Gradient Variance Bounds for Black-Box Variational Bayesian Inference

no code implementations • 18 Mar 2023 • Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner

Understanding the gradient variance of black-box variational inference (BBVI) is a crucial step for establishing its convergence and developing algorithmic improvements.

Bayesian Inference • Variational Inference

Local Bayesian optimization via maximizing probability of descent

1 code implementation • 21 Oct 2022 • Quan Nguyen, Kaiwen Wu, Jacob R. Gardner, Roman Garnett

Local optimization presents a promising approach to expensive, high-dimensional black-box optimization by sidestepping the need to globally explore the search space.

Bayesian Optimization • Navigate

Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients

1 code implementation • 13 Jun 2022 • Kyurae Kim, Jisu Oh, Jacob R. Gardner, Adji Bousso Dieng, HongSeok Kim

Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior.
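
For reference, the standard identity behind this difficulty (notation mine): the gradient is an expectation under the intractable posterior π, which is why Markovian (MCMC-driven) gradient estimates are needed in the first place.

```latex
\nabla_{\lambda}\,\mathrm{KL}\!\left(\pi \,\middle\|\, q_{\lambda}\right)
  = \nabla_{\lambda}\,\mathbb{E}_{z \sim \pi}\!\left[\log \pi(z) - \log q_{\lambda}(z)\right]
  = -\,\mathbb{E}_{z \sim \pi}\!\left[\nabla_{\lambda} \log q_{\lambda}(z)\right]
```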

Variational Inference

Local Latent Space Bayesian Optimization over Structured Inputs

1 code implementation • 28 Jan 2022 • Natalie Maus, Haydn T. Jones, Juston S. Moore, Matt J. Kusner, John Bradshaw, Jacob R. Gardner

By reformulating the encoder to function as both an encoder for the DAE globally and as a deep kernel for the surrogate model within a trust region, we better align the notion of local optimization in the latent space with local optimization in the input space.

Bayesian Optimization

Scaling Gaussian Processes with Derivative Information Using Variational Inference

no code implementations • NeurIPS 2021 • Misha Padidar, Xinran Zhu, Leo Huang, Jacob R. Gardner, David Bindel

We demonstrate the full scalability of our approach on a variety of tasks, ranging from a high-dimensional stellarator fusion regression task to training graph convolutional neural networks on PubMed using Bayesian optimization.

Bayesian Optimization • Gaussian Processes • +2

Preconditioning for Scalable Gaussian Process Hyperparameter Optimization

no code implementations • 1 Jul 2021 • Jonathan Wenger, Geoff Pleiss, Philipp Hennig, John P. Cunningham, Jacob R. Gardner

While preconditioning is well understood in the context of CG, we demonstrate that it can also accelerate convergence and reduce the variance of the estimates for the log-determinant and its derivative.
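
For reference, stochastic log-determinant derivative estimates of this kind take the standard Hutchinson trace-estimation form below; the linear solves K^{-1}(·) inside it are what preconditioned CG accelerates (notation mine):

```latex
\frac{\partial}{\partial \theta} \log \det K_{\theta}
  = \operatorname{tr}\!\left( K_{\theta}^{-1}\, \frac{\partial K_{\theta}}{\partial \theta} \right)
  \;\approx\; \frac{1}{m} \sum_{i=1}^{m} z_i^{\top} K_{\theta}^{-1}\,
      \frac{\partial K_{\theta}}{\partial \theta}\, z_i ,
  \qquad z_i \sim \mathcal{N}(0, I)
```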

Gaussian Processes • Hyperparameter Optimization

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

1 code implementation • NeurIPS 2020 • Shali Jiang, Daniel R. Jiang, Maximilian Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree.
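
In its textbook form, the nested structure looks roughly as follows: the k-step value of a candidate x is an expectation over its outcome y of the immediate utility plus the best (k-1)-step value under the updated dataset. The notation is mine, not the paper's:

```latex
\alpha_{k}\!\left(x \mid \mathcal{D}\right)
  = \mathbb{E}_{y \sim p(y \mid x, \mathcal{D})}\!\left[
      u(x, y \mid \mathcal{D})
      \;+\; \max_{x'} \alpha_{k-1}\!\left(x' \mid \mathcal{D} \cup \{(x, y)\}\right)
    \right]
```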

Bayesian Optimization • Decision Making

Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization

1 code implementation • NeurIPS 2020 • Geoff Pleiss, Martin Jankowiak, David Eriksson, Anil Damle, Jacob R. Gardner

Matrix square roots and their inverses arise frequently in machine learning, e.g., when sampling from high-dimensional Gaussians $\mathcal{N}(\mathbf 0, \mathbf K)$ or whitening a vector $\mathbf b$ against covariance matrix $\mathbf K$.
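
A minimal NumPy sketch of both uses via a dense Cholesky factor, i.e., the O(n^3) baseline that fast matrix square-root methods aim to replace; the matrix and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)   # a hypothetical SPD covariance matrix

# Dense Cholesky factor K = L L^T: the cubic-cost "matrix square root" baseline.
L = np.linalg.cholesky(K)

# Sampling from N(0, K): x = L z has covariance L I L^T = K.
z = rng.normal(size=n)
x = L @ z

# Whitening b against K: if b ~ N(0, K), then w = L^{-1} b ~ N(0, I).
b = L @ rng.normal(size=n)
w = np.linalg.solve(L, b)
```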

Bayesian Optimization • Gaussian Processes

Deep Sigma Point Processes

no code implementations • 21 Feb 2020 • Martin Jankowiak, Geoff Pleiss, Jacob R. Gardner

We introduce Deep Sigma Point Processes, a class of parametric models inspired by the compositional structure of Deep Gaussian Processes (DGPs).

Gaussian Processes • Point Processes • +1

Parametric Gaussian Process Regressors

no code implementations • ICML 2020 • Martin Jankowiak, Geoff Pleiss, Jacob R. Gardner

In an extensive empirical comparison with a number of alternative methods for scalable GP regression, we find that the resulting predictive distributions exhibit significantly better-calibrated uncertainties and higher log likelihoods, often by as much as half a nat per datapoint.

regression • Variational Inference

Scalable Global Optimization via Local Bayesian Optimization

2 code implementations • NeurIPS 2019 • David Eriksson, Michael Pearce, Jacob R. Gardner, Ryan Turner, Matthias Poloczek

This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems.

Bayesian Optimization

Simple Black-box Adversarial Attacks

4 code implementations • ICLR 2019 • Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, Kilian Q. Weinberger

We propose an intriguingly simple method for the construction of adversarial images in the black-box setting.
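
A hedged sketch of that construction as commonly described: a query-only loop that steps along random orthonormal directions (here the pixel basis) and keeps a step only when it lowers the model's probability of the true label. The `prob_fn` oracle, step size, and [0, 1] pixel range are hypothetical placeholders, not the paper's exact interface:

```python
import numpy as np

def simple_blackbox_attack(x, true_label, prob_fn, eps=0.2, n_iters=1000, seed=0):
    """prob_fn(x, y) returns the model's probability of label y for input x.
    Only output probabilities are queried -- no gradients are needed."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p = prob_fn(x_adv, true_label)
    # Visit coordinate (orthonormal basis) directions in random order.
    for idx in rng.permutation(x.size)[:n_iters]:
        q = np.zeros(x.size)
        q[idx] = 1.0
        q = q.reshape(x.shape)
        for sign in (+1.0, -1.0):
            candidate = np.clip(x_adv + sign * eps * q, 0.0, 1.0)
            p_new = prob_fn(candidate, true_label)
            if p_new < p:           # keep the step only if it helps
                x_adv, p = candidate, p_new
                break
    return x_adv
```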

GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration

4 code implementations • NeurIPS 2018 • Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, Andrew Gordon Wilson

Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware.
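
GPyTorch is open source (https://github.com/cornellius-gp/gpytorch); the following mirrors its standard exact-regression pattern, though API details may vary across versions. Moving the model, likelihood, and data to a GPU is what engages the hardware-accelerated matrix-matrix path the paper describes:

```python
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
# .cuda() on model, likelihood, and data enables the GPU-accelerated path.
```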

Gaussian Processes

Constant-Time Predictive Distributions for Gaussian Processes

1 code implementation • ICML 2018 • Geoff Pleiss, Jacob R. Gardner, Kilian Q. Weinberger, Andrew Gordon Wilson

One of the most compelling features of Gaussian process (GP) regression is its ability to provide well-calibrated posterior distributions.
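
For reference, the posterior referred to here is the standard GP predictive distribution below (notation mine); evaluating its variance term per test point is the cost that constant-time methods cache away:

```latex
p\!\left(f_* \mid x_*, X, \mathbf{y}\right)
  = \mathcal{N}\!\Big(
      \mathbf{k}_*^{\top} \left(K + \sigma^2 I\right)^{-1} \mathbf{y},\;
      k(x_*, x_*) - \mathbf{k}_*^{\top} \left(K + \sigma^2 I\right)^{-1} \mathbf{k}_*
    \Big)
```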

Gaussian Processes • regression

Product Kernel Interpolation for Scalable Gaussian Processes

1 code implementation • 24 Feb 2018 • Jacob R. Gardner, Geoff Pleiss, Ruihan Wu, Kilian Q. Weinberger, Andrew Gordon Wilson

Recent work shows that inference for Gaussian processes can be performed efficiently using iterative methods that rely only on matrix-vector multiplications (MVMs).
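
A minimal sketch of the kind of MVM-only iterative solver meant here: conjugate gradients, where the kernel matrix is accessed only through products `K @ v`. This is the standard algorithm, not the paper's specific implementation; when K has exploitable structure (e.g., a product of kernels), the MVM can be far cheaper than forming K:

```python
import numpy as np

def conjugate_gradients(mvm, b, tol=1e-8, max_iters=1000):
    """Solve K x = b for SPD K, touching K only through mvm(v) = K @ v."""
    x = np.zeros_like(b)
    r = b - mvm(x)                 # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Kp = mvm(p)
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # residual small enough: converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example with an explicit matrix (a structured MVM would replace this lambda):
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100))
K = A @ A.T + 100 * np.eye(100)
b = rng.normal(size=100)
x = conjugate_gradients(lambda v: K @ v, b)
```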

Gaussian Processes

Deep Manifold Traversal: Changing Labels with Convolutional Features

no code implementations • 19 Nov 2015 • Jacob R. Gardner, Paul Upchurch, Matt J. Kusner, Yixuan Li, Kilian Q. Weinberger, Kavita Bala, John E. Hopcroft

Many tasks in computer vision can be cast as a "label changing" problem, where the goal is to make a semantic change to the appearance of an image or some subject in an image in order to alter the class membership.

Compressed Support Vector Machines

no code implementations • 26 Jan 2015 • Zhixiang Xu, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger

For most of the time during which we conducted this research, we were unaware of this prior work.

Differentially Private Bayesian Optimization

no code implementations • 16 Jan 2015 • Matt J. Kusner, Jacob R. Gardner, Roman Garnett, Kilian Q. Weinberger

The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems.

Bayesian Optimization • BIG-bench Machine Learning

Parallel Support Vector Machines in Practice

no code implementations • 3 Apr 2014 • Stephen Tyree, Jacob R. Gardner, Kilian Q. Weinberger, Kunal Agrawal, John Tran

In particular, we provide the first comparison of algorithms with explicit and implicit parallelization.
