Search Results for author: Samuel Stanton

Found 11 papers, 10 papers with code

Bayesian Optimization with Conformal Prediction Sets

1 code implementation • 22 Oct 2022 • Samuel Stanton, Wesley Maddox, Andrew Gordon Wilson

Bayesian optimization is a coherent, ubiquitous approach to decision-making under uncertainty, with applications including multi-armed bandits, active learning, and black-box optimization.

Active Learning • Bayesian Optimization • +5 more
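
For context, a minimal sketch of a generic Bayesian optimization loop: a GP surrogate is fit to the observations so far, and the next query maximizes an upper-confidence-bound acquisition. The kernel, hyperparameters, and objective are illustrative assumptions; this is not the conformal-prediction method introduced in the paper.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Closed-form GP posterior mean and variance at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Kq = rbf_kernel(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Kq.T)
    var = 1.0 - np.sum(Kq * v.T, axis=1)   # RBF prior variance is 1
    return mean, np.maximum(var, 1e-12)

def objective(x):                           # hypothetical noisy black box
    return np.sin(6 * x) + 0.1 * np.random.randn(*x.shape)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=3)               # initial design
y = objective(X)
candidates = np.linspace(0, 1, 200)

for _ in range(10):                          # sequential BO loop
    mean, var = gp_posterior(X, y, candidates)
    ucb = mean + 2.0 * np.sqrt(var)          # upper confidence bound
    x_next = candidates[np.argmax(ucb)]
    X = np.append(X, x_next)
    y = np.append(y, objective(np.array([x_next])))

print("best observed:", X[np.argmax(y)], y.max())
```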

Deconstructing the Inductive Biases of Hamiltonian Neural Networks

1 code implementation • ICLR 2022 • Nate Gruver, Marc Finzi, Samuel Stanton, Andrew Gordon Wilson

Physics-inspired neural networks (NNs), such as Hamiltonian or Lagrangian NNs, dramatically outperform other learned dynamics models by leveraging strong inductive biases.
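
A minimal sketch of the core Hamiltonian NN construction: an MLP learns a scalar Hamiltonian H(q, p), and the dynamics are read off its symplectic gradient via autograd. Network sizes and the spring-system training target are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

class HamiltonianNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.dim = dim
        self.H = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z):
        # z stacks (q, p); Hamilton's equations give the dynamics:
        #   dq/dt = dH/dp,  dp/dt = -dH/dq
        z = z.clone().requires_grad_(True)
        H = self.H(z).sum()
        dH, = torch.autograd.grad(H, z, create_graph=True)
        dHdq, dHdp = dH[..., :self.dim], dH[..., self.dim:]
        return torch.cat([dHdp, -dHdq], dim=-1)

model = HamiltonianNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Ideal spring with H = (q^2 + p^2) / 2, so dq/dt = p and dp/dt = -q.
z = torch.randn(128, 2)
dz_true = torch.stack([z[:, 1], -z[:, 0]], dim=-1)
for _ in range(200):
    loss = ((model(z) - dz_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```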

Conditioning Sparse Variational Gaussian Processes for Online Decision-making

1 code implementation • NeurIPS 2021 • Wesley J. Maddox, Samuel Stanton, Andrew Gordon Wilson

With a principled representation of uncertainty and closed form posterior updates, Gaussian processes (GPs) are a natural choice for online decision making.

Active Learning • Decision Making • +1 more
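
To illustrate why closed-form conditioning suits online decision making, here is a minimal sketch of a plain exact GP driving Thompson sampling: the posterior over candidate actions is computed in closed form, and each decision greedily follows one posterior draw. This is not the paper's sparse variational construction; kernel, noise level, and reward are assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
actions = np.linspace(0, 1, 100)            # candidate decisions
reward = lambda x: np.sin(5 * x) + 0.1 * rng.normal(size=np.shape(x))

X = np.array([0.5]); y = reward(X)          # one initial observation
for t in range(20):
    # Closed-form posterior over all candidate actions.
    K = rbf(X, X) + 1e-2 * np.eye(len(X))
    Ks = rbf(actions, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = rbf(actions, actions) - Ks @ np.linalg.solve(K, Ks.T)
    # Thompson sampling: act greedily w.r.t. a single posterior draw.
    f = rng.multivariate_normal(mean, cov + 1e-6 * np.eye(len(actions)))
    a = actions[np.argmax(f)]
    X = np.append(X, a); y = np.append(y, reward(a))
```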

Does Knowledge Distillation Really Work?

2 code implementations • NeurIPS 2021 • Samuel Stanton, Pavel Izmailov, Polina Kirichenko, Alexander A. Alemi, Andrew Gordon Wilson

Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks.

Knowledge Distillation
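
A minimal sketch of the standard distillation objective the paper examines: the student matches the teacher's temperature-softened logits through a KL term, blended with the usual cross-entropy on labels. Model sizes, the temperature T, and the mixing weight alpha are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # KL between softened distributions; the T^2 factor keeps gradient
    # magnitudes comparable across temperatures (Hinton et al., 2015).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage with stand-in models and random data:
teacher = torch.nn.Linear(32, 10)           # stand-in pretrained teacher
student = torch.nn.Linear(32, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(64, 32); labels = torch.randint(0, 10, (64,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
loss.backward(); opt.step()
```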

Kernel Interpolation for Scalable Online Gaussian Processes

2 code implementations • 2 Mar 2021 • Samuel Stanton, Wesley J. Maddox, Ian Delbridge, Andrew Gordon Wilson

Gaussian processes (GPs) provide a gold standard for performance in online settings, such as sample-efficient control and black box optimization, where we need to update a posterior distribution as we acquire data in a sequential fashion.

Bayesian Optimization • Gaussian Processes
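
A minimal sketch of the online setting this paper targets: when a new point arrives, the Cholesky factor of the kernel matrix is extended in O(n^2) rather than refit from scratch in O(n^3). This is plain exact-GP updating, not the paper's structured kernel interpolation method; kernel and noise level are assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

noise = 1e-2
X = np.empty(0); y = np.empty(0)
L = np.empty((0, 0))                   # Cholesky factor of K + noise*I

def observe(x_new, y_new):
    """Append one observation via a rank-one Cholesky extension."""
    global X, y, L
    k = rbf(X, np.array([x_new]))[:, 0]          # cross-covariances
    c = np.linalg.solve(L, k) if len(X) else np.empty(0)
    d = np.sqrt(1.0 + noise - c @ c)             # new diagonal entry
    L = np.block([[L, np.zeros((len(X), 1))],
                  [c[None, :], np.array([[d]])]])
    X = np.append(X, x_new); y = np.append(y, y_new)

def predict(xq):
    kq = rbf(np.array([xq]), X)[0]
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, kq)
    return kq @ alpha, 1.0 - v @ v               # posterior mean, var

rng = np.random.default_rng(0)
for t in range(50):                              # data arrives online
    x = rng.uniform(-2, 2)
    observe(x, np.sin(2 * x) + 0.1 * rng.normal())
print(predict(0.5))
```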

On the model-based stochastic value gradient for continuous reinforcement learning

1 code implementation • 28 Aug 2020 • Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson

For over a decade, model-based reinforcement learning has been seen as a way to leverage control-based domain knowledge to improve the sample-efficiency of reinforcement learning agents.

Continuous Control • Humanoid Control • +4 more
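
A minimal sketch of the model-based stochastic value gradient idea: roll the policy through a differentiable dynamics model for a short horizon and ascend the gradient of the predicted return. The networks, horizon, and reward are illustrative stand-ins, and this omits the value-function bootstrap and the rest of the full method's actor-critic machinery.

```python
import torch
import torch.nn as nn

state_dim, act_dim, horizon = 3, 1, 5
dynamics = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.Tanh(),
                         nn.Linear(64, state_dim))   # stand-in learned model
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                       nn.Linear(64, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(s, a):                       # hypothetical smooth reward
    return -(s ** 2).sum(-1) - 0.01 * (a ** 2).sum(-1)

for step in range(100):
    s = torch.randn(32, state_dim)      # batch of start states
    ret = 0.0
    for t in range(horizon):            # differentiable rollout
        a = policy(s)
        ret = ret + reward(s, a)
        s = s + dynamics(torch.cat([s, a], dim=-1))   # residual step
    loss = -ret.mean()                  # ascend the predicted return
    opt.zero_grad(); loss.backward(); opt.step()
```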
