Search Results for author: Andrew Y. K. Foong

Found 13 papers, 10 papers with code

Denoising Diffusion Probabilistic Models in Six Simple Steps

no code implementations 6 Feb 2024 Richard E. Turner, Cristiana-Diana Diaconu, Stratis Markou, Aliaksandra Shysheya, Andrew Y. K. Foong, Bruno Mlodozeniec

Denoising Diffusion Probabilistic Models (DDPMs) are a very popular class of deep generative model that have been successfully applied to a diverse range of problems including image and video generation, protein and material synthesis, weather forecasting, and neural surrogates of partial differential equations.

Denoising, Video Generation, +1
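As a pointer for readers new to DDPMs, the forward (noising) process they are built on can be written in a few lines. This is a generic sketch with an illustrative linear beta schedule, not a summary of this paper's six steps; the constants are assumptions.

```python
import numpy as np

# Illustrative linear noise schedule; T and the beta range are assumptions, not from the paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps, eps

# A DDPM is trained to predict eps from (x_t, t); generation then reverses this chain step by step.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)
xt, eps = forward_noise(x0, t=500, rng=rng)
```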

Autoregressive Conditional Neural Processes

1 code implementation 25 Mar 2023 Wessel P. Bruinsma, Stratis Markou, James Requeima, Andrew Y. K. Foong, Tom R. Andersson, Anna Vaughan, Anthony Buonomo, J. Scott Hosking, Richard E. Turner

Our work provides an example of how ideas from neural distribution estimation can benefit neural processes, and motivates research into the AR deployment of other neural process models.

Meta-Learning
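The "AR deployment" referred to above can be sketched in a few lines: rather than predicting all targets independently, each sampled target is appended to the context before the next one is predicted, which induces dependencies between targets. A minimal sketch, assuming a hypothetical `predict(context_x, context_y, x)` that returns a Gaussian mean and standard deviation for a single target (the name and signature are placeholders, not the paper's API):

```python
import numpy as np

def ar_sample(predict, context_x, context_y, target_x, rng=np.random.default_rng(0)):
    """Autoregressive sampling from a conditional neural process:
    each sampled target is fed back into the context for the next prediction."""
    cx, cy = list(context_x), list(context_y)
    samples = []
    for x in target_x:
        mean, std = predict(np.array(cx), np.array(cy), x)  # placeholder predictive
        y = rng.normal(mean, std)
        samples.append(y)
        cx.append(x)
        cy.append(y)
    return np.array(samples)
```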

Timewarp: Transferable Acceleration of Molecular Dynamics by Learning Time-Coarsened Dynamics

1 code implementation NeurIPS 2023 Leon Klein, Andrew Y. K. Foong, Tor Erlend Fjelde, Bruno Mlodozeniec, Marc Brockschmidt, Sebastian Nowozin, Frank Noé, Ryota Tomioka

Molecular dynamics (MD) simulation is a widely used technique to simulate molecular systems, most commonly at the all-atom resolution where equations of motion are integrated with timesteps on the order of femtoseconds ($1\textrm{fs}=10^{-15}\textrm{s}$).
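As context for the femtosecond timestep mentioned above, this is roughly what one integration step looks like in a standard MD engine (a generic velocity-Verlet sketch, not Timewarp's learned dynamics; the force function is a placeholder):

```python
import numpy as np

def velocity_verlet_step(x, v, forces, mass, dt=1e-15):
    """One velocity-Verlet step with dt on the order of 1 fs = 1e-15 s;
    reaching microsecond timescales therefore takes on the order of 1e9 such steps."""
    a = forces(x) / mass
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = forces(x_new) / mass
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new
```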

A Note on the Chernoff Bound for Random Variables in the Unit Interval

no code implementations 15 May 2022 Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt

The Chernoff bound is a well-known tool for obtaining a high probability bound on the expectation of a Bernoulli random variable in terms of its sample average.

Learning Theory
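One common form of the resulting high-probability statement: with probability at least 1 - delta, the mean mu of n i.i.d. [0, 1]-valued variables is no larger than the largest p satisfying kl(X̄ ‖ p) ≤ log(1/delta)/n, where kl is the Bernoulli KL divergence. A minimal sketch of that numerical inversion by bisection (an illustration of the standard bound, not of the note's specific contribution):

```python
import numpy as np

def kl_bernoulli(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def kl_inverse_upper(q_hat, c, tol=1e-10):
    """Largest p in [q_hat, 1] with kl(q_hat || p) <= c: a high-probability
    upper bound on the mean given the sample average q_hat."""
    lo, hi = q_hat, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(q_hat, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

# Example: sample average 0.1 over n = 1000 points, delta = 0.05.
n, delta = 1000, 0.05
print(kl_inverse_upper(0.1, np.log(1 / delta) / n))
```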

How Tight Can PAC-Bayes be in the Small Data Regime?

1 code implementation NeurIPS 2021 Andrew Y. K. Foong, Wessel P. Bruinsma, David R. Burt, Richard E. Turner

Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
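For reference, a standard PAC-Bayes-kl bound of the kind whose tightness is being studied (the Maurer/Seeger form from the general literature, not a result quoted from this paper): for a prior P chosen before seeing the n data points and any delta in (0, 1), with probability at least 1 - delta, simultaneously for all posteriors Q,

```latex
\mathrm{kl}\!\left(\hat{R}_S(Q) \,\middle\|\, R(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
```

where $\hat{R}_S(Q)$ and $R(Q)$ are the empirical and true risks of the Gibbs predictor. Setting $Q = P$ makes the $\mathrm{KL}$ term vanish, leaving a Chernoff-style test-set bound (up to the $\ln 2\sqrt{n}$ factor), consistent with the remark above.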

The Gaussian Neural Process

1 code implementation AABI (Advances in Approximate Bayesian Inference) Symposium 2021 Wessel P. Bruinsma, James Requeima, Andrew Y. K. Foong, Jonathan Gordon, Richard E. Turner

Neural Processes (NPs; Garnelo et al., 2018a, b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes.

Meta-Learning, Translation

Structured Weight Priors for Convolutional Neural Networks

1 code implementation 12 Jul 2020 Tim Pearce, Andrew Y. K. Foong, Alexandra Brintrup

This paper explores the benefits of adding structure to weight priors.

Convolutional Conditional Neural Processes

3 code implementations ICLR 2020 Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner

We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.

Inductive Bias, Time Series, +3
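The translation-equivariant building block at the heart of the ConvCNP is a "SetConv" that maps the context set onto a uniform grid with an RBF kernel, producing a density channel and a normalised data channel that a CNN can then process. A minimal 1-D sketch with an arbitrary grid and length-scale (an illustration of the idea, not the released implementation):

```python
import numpy as np

def set_conv(context_x, context_y, grid_x, lengthscale=0.1):
    """Map a context set to a gridded functional representation.
    Shifting the context (and grid) by a constant shifts the representation,
    which is the translation equivariance the ConvCNP builds in."""
    # RBF weights between every grid point and every context point.
    w = np.exp(-0.5 * ((grid_x[:, None] - context_x[None, :]) / lengthscale) ** 2)
    density = w.sum(axis=1)                               # "density" channel
    signal = w @ context_y / np.maximum(density, 1e-12)   # normalised "data" channel
    return np.stack([density, signal], axis=-1)           # (grid_size, 2), fed to a CNN in the full model

grid = np.linspace(-2.0, 2.0, 100)
rep = set_conv(np.array([-0.5, 0.3]), np.array([1.0, -1.0]), grid)
```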

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

2 code implementations NeurIPS 2020 Andrew Y. K. Foong, David R. Burt, Yingzhen Li, Richard E. Turner

While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood.

Active Learning, Bayesian Inference, +3

'In-Between' Uncertainty in Bayesian Neural Networks

no code implementations 27 Jun 2019 Andrew Y. K. Foong, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks.

Active Learning, Bayesian Optimisation, +1
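For context, MFVI approximates the posterior over a network's weights with a fully factorised Gaussian whose parameters are fit by maximising a Monte Carlo estimate of the ELBO via the reparameterisation trick. A minimal sketch for a generic weight vector (an illustration of MFVI itself, not the paper's experimental setup; the log-joint below is a trivial placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(mu, log_sigma, log_joint, n_samples=8):
    """Monte Carlo ELBO for q(w) = N(mu, diag(sigma^2)):
    reparameterise w = mu + sigma * eps with eps ~ N(0, I), then
    ELBO = E_q[log p(data, w)] + entropy of q (analytic for a Gaussian)."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples, mu.size))
    w = mu + sigma * eps
    entropy = np.sum(log_sigma + 0.5 * np.log(2 * np.pi * np.e))
    return np.mean([log_joint(wi) for wi in w]) + entropy

# Placeholder log-joint standing in for a BNN's log prior + log likelihood.
log_joint = lambda w: -0.5 * np.sum(w ** 2)
print(elbo_estimate(np.zeros(3), np.full(3, -1.0), log_joint))
```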
