1 code implementation • 24 Feb 2022 • Matthew Ashman, Thang D. Bui, Cuong V. Nguyen, Stratis Markou, Adrian Weller, Siddharth Swaroop, Richard E. Turner
Variational inference (VI) has become the method of choice for fitting many modern probabilistic models.
1 code implementation • 9 Jun 2020 • Sanyam Kapoor, Theofanis Karaletsos, Thang D. Bui
Through sequential construction of posteriors on observing data online, Bayes' theorem provides a natural framework for continual learning.
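As a toy illustration of this recursive construction, here is a minimal NumPy sketch using a conjugate Gaussian mean-estimation model (the model and numbers are illustrative, not the paper's): the posterior after each batch of data becomes the prior for the next.

```python
import numpy as np

# Conjugate Gaussian model: unknown mean, known observation noise.
# Bayes' theorem applied online: posterior(t) becomes prior(t+1).
def online_update(prior_mean, prior_var, batch, noise_var=1.0):
    n = len(batch)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + batch.sum() / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
mean, var = 0.0, 10.0  # broad initial prior
for t in range(5):
    batch = rng.normal(2.0, 1.0, size=20)  # data arriving sequentially
    mean, var = online_update(mean, var, batch)
    print(f"batch {t}: posterior mean={mean:.3f}, var={var:.4f}")
```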
no code implementations • NeurIPS 2020 • Theofanis Karaletsos, Thang D. Bui
Probabilistic neural networks are typically modeled with independent weight priors, which do not capture weight correlations in the prior and do not provide a parsimonious interface to express properties in function space.
no code implementations • Advances in Approximate Bayesian Inference (AABI) Symposium 2019 • Theofanis Karaletsos, Thang D. Bui
Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.
1 code implementation • 6 May 2019 • Siddharth Swaroop, Cuong V. Nguyen, Thang D. Bui, Richard E. Turner
In the continual learning setting, tasks are encountered sequentially.
no code implementations • 27 Nov 2018 • Thang D. Bui, Cuong V. Nguyen, Siddharth Swaroop, Richard E. Turner
Second, the updates can differ in granularity: e.g., they may be local to each data point, employing message passing, or applied globally.
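A toy contrast of the two granularities, for a conjugate Gaussian model (zero prior mean) where both bookkeeping schemes recover the same posterior; this is an illustrative sketch, not the paper's framework:

```python
import numpy as np

# Posterior precision = prior precision plus likelihood contributions,
# tracked either as one "site" per data point or as one global factor.
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, 50)
prior_prec, noise_prec = 0.1, 1.0

# Local: each data point keeps its own natural-parameter site, which
# can be revisited and refined independently (message-passing style).
sites = [(noise_prec, noise_prec * yi) for yi in y]
post_prec = prior_prec + sum(s[0] for s in sites)
post_mean = sum(s[1] for s in sites) / post_prec

# Global: a single factor summarises the whole likelihood contribution.
global_site = (noise_prec * len(y), noise_prec * y.sum())
assert np.isclose(post_mean, global_site[1] / (prior_prec + global_site[0]))
print(post_mean)
```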
8 code implementations • ICLR 2018 • Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, Richard E. Turner
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks.
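The core of VCL is an ELBO whose KL regulariser pulls the new posterior towards the previous task's posterior rather than a fixed prior. A minimal NumPy sketch with mean-field Gaussian posteriors (the function names and toy likelihood are mine, not the paper's code):

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    # KL(q || p) between diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def vcl_objective(mu, var, prev_mu, prev_var, log_lik_fn, data, n_samples=8):
    # Monte Carlo estimate of E_q[log p(D_t | w)] minus the KL to the
    # previous task's posterior, which plays the role of the prior.
    rng = np.random.default_rng(0)
    w = mu + np.sqrt(var) * rng.standard_normal((n_samples, mu.size))
    exp_ll = np.mean([log_lik_fn(wi, data) for wi in w])
    return exp_ll - kl_diag_gaussians(mu, var, prev_mu, prev_var)

# Toy usage: scalar weight, Gaussian likelihood y ~ N(w, 1).
data = np.array([0.9, 1.1, 1.3])
log_lik = lambda w, y: -0.5 * np.sum((y - w) ** 2)
print(vcl_objective(np.array([1.0]), np.array([0.5]),
                    np.array([0.0]), np.array([1.0]), log_lik, data))
```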
3 code implementations • NeurIPS 2017 • Thang D. Bui, Cuong V. Nguyen, Richard E. Turner
Sparse pseudo-point approximations for Gaussian process (GP) models provide a suite of methods that support the deployment of GPs in the large-data regime and enable analytic intractabilities to be sidestepped.
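As a flavour of the idea, here is a minimal subset-of-regressors sketch in NumPy, in which predictions depend on the N data points only through M pseudo-points; this is one of the simplest members of the family, whereas the paper unifies a much broader class under a power-EP view:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel matrix between 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)                 # N = 200 training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(200)
z = np.linspace(-3, 3, 10)                  # M = 10 pseudo-inputs
noise = 0.1 ** 2

Kzz = rbf(z, z) + 1e-6 * np.eye(len(z))
Kzx = rbf(z, x)
# Subset-of-regressors predictive mean: the data enter only through
# the M x M system built from the pseudo-point cross-covariances.
A = Kzz * noise + Kzx @ Kzx.T
m = np.linalg.solve(A, Kzx @ y)
x_test = np.linspace(-3, 3, 5)
print(rbf(x_test, z) @ m)                   # approximate predictive mean
```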
no code implementations • 14 Mar 2017 • Thang D. Bui, Sujith Ravi, Vivek Ramavajjala
In this work, we propose a training framework with a graph-regularised objective, namely "Neural Graph Machines", that can combine the power of neural networks and label propagation.
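A sketch of what such a graph-regularised objective can look like: a supervised term plus a penalty pulling the hidden representations of graph-adjacent nodes together. The names, shapes, and quadratic penalty below are my illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def ngm_loss(h, logits, labels, edges, weights, alpha=0.1):
    """Graph-regularised objective sketch: cross-entropy on labelled
    nodes plus a label-propagation-style penalty over graph edges."""
    # Softmax cross-entropy over labelled examples.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # Penalty over edges (u, v) with weight w_uv: nearby nodes in the
    # graph are encouraged to have nearby hidden representations h.
    reg = sum(w * np.sum((h[u] - h[v]) ** 2)
              for (u, v), w in zip(edges, weights))
    return ce + alpha * reg

h = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0]])
logits = np.array([[2.0, 0.1], [1.5, 0.2], [0.1, 2.0]])
print(ngm_loss(h, logits, np.array([0, 0, 1]), [(0, 1), (1, 2)], [1.0, 0.5]))
```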
1 code implementation • 23 May 2016 • Thang D. Bui, Josiah Yan, Richard E. Turner
Unlike much of the previous venerable work in this area, the new framework is built on standard methods for approximate inference (variational free-energy, EP and Power EP methods) rather than employing approximations to the probabilistic generative model itself.
no code implementations • 12 Feb 2016 • Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
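To make the hierarchy concrete, here is a sketch of sampling from a two-layer DGP prior in one dimension, where the output of the first GP layer becomes the input of the second (an illustration of the model class, not the paper's inference scheme):

```python
import numpy as np

def gp_sample(x, ls=1.0, jitter=1e-6, rng=None):
    # Draw one function sample at inputs x from a GP with an RBF kernel.
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / ls) ** 2) + jitter * np.eye(len(x))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(x))

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 100)
f1 = gp_sample(x, ls=1.0, rng=rng)   # first hidden layer
f2 = gp_sample(f1, ls=0.5, rng=rng)  # second layer, fed the first's output
print(f2[:5])
```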
no code implementations • NeurIPS 2015 • Felipe Tobar, Thang D. Bui, Richard E. Turner
We introduce the Gaussian Process Convolution Model (GPCM), a two-stage nonparametric generative procedure to model stationary signals as the convolution between a continuous-time white-noise process and a continuous-time linear filter drawn from a Gaussian process.
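A loose, discretised sketch of that generative recipe (the paper's construction is continuous-time; the decay window and length-scales below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-2, 2, 200)
# Draw a smooth filter h(t) from a GP with a squared-exponential kernel.
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.3) ** 2)
h = np.linalg.cholesky(K + 1e-6 * np.eye(len(t))) @ rng.standard_normal(len(t))
h *= np.exp(-t ** 2)  # decaying window, loosely mimicking a finite-energy filter
# Convolve with (discretised) white noise to get a stationary signal.
noise = rng.standard_normal(1000)
signal = np.convolve(noise, h, mode="same")
print(signal[:5])
```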
no code implementations • 11 Nov 2015 • Thang D. Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
no code implementations • NeurIPS 2014 • Thang D. Bui, Richard E. Turner
Gaussian process regression can be accelerated by constructing a small pseudo-dataset to summarise the observed data.