no code implementations • 7 Jul 2022 • Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner
Machine unlearning refers to the task of removing a subset of training data and, with it, that subset's contribution to a trained model.
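As a point of reference, the unlearning task can be sketched via the exact-retraining baseline: drop the forget set and refit from scratch. The model class, index choices, and variable names below are illustrative assumptions, not the approach studied in this paper.

```python
# Hypothetical illustration of the unlearning task via the exact-retraining
# baseline: delete the forget set and refit from scratch. This is only a
# reference point for unlearning, not the method proposed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

full_model = LogisticRegression().fit(X, y)       # model trained on all data

forget = np.arange(20)                            # subset whose contribution must be removed
keep = np.setdiff1d(np.arange(len(X)), forget)
unlearned = LogisticRegression().fit(X[keep], y[keep])  # exact "unlearned" reference model
```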
1 code implementation • NeurIPS 2021 • Andrew Foong, Wessel Bruinsma, David Burt, Richard Turner
Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior.
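For context, the Chernoff test set bound referred to here is commonly stated as follows (the notation is ours and may differ from the paper): for a fixed hypothesis evaluated on n held-out points, with probability at least 1 − δ,

```latex
% Common statement of the Chernoff test-set bound (illustrative notation):
% \hat{R}_n(h) is the held-out error, R(h) the true risk, and kl the KL
% divergence between Bernoulli distributions with the given means.
\operatorname{kl}\!\big(\hat{R}_n(h) \,\big\|\, R(h)\big) \;\le\; \frac{\log(1/\delta)}{n}.
```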
no code implementations • ICLR 2022 • Stratis Markou, James Requeima, Wessel Bruinsma, Anna Vaughan, Richard E Turner
Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.
no code implementations • 22 Aug 2021 • Stratis Markou, James Requeima, Wessel Bruinsma, Richard Turner
Conditional Neural Processes (CNPs; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood procedure.
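To make that maximum-likelihood procedure concrete, here is a minimal CNP-style sketch in PyTorch. The layer sizes, the mean-pooling encoder, and the Gaussian output parameterisation are illustrative assumptions rather than the exact architecture of Garnelo et al. (2018).

```python
# Minimal CNP-style model: encode context points, mean-pool into a single
# representation, and decode a Gaussian prediction at each target input.
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=128):
        super().__init__()
        # Encoder: maps each (x, y) context pair to a representation.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, r_dim),
        )
        # Decoder: maps (pooled representation, target input) to mean and log-variance.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, r_dim), nn.ReLU(),
            nn.Linear(r_dim, 2 * y_dim),
        )

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Mean-pool per-point representations into one context vector.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=-2, keepdim=True)
        r = r.expand(*x_tgt.shape[:-1], r.shape[-1])
        mean, log_var = self.decoder(torch.cat([r, x_tgt], dim=-1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

# One maximum-likelihood training step on a toy batch of random tasks.
model = CNP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_ctx, y_ctx = torch.randn(16, 10, 1), torch.randn(16, 10, 1)
x_tgt, y_tgt = torch.randn(16, 20, 1), torch.randn(16, 20, 1)
loss = -model(x_ctx, y_ctx, x_tgt).log_prob(y_tgt).mean()
opt.zero_grad(); loss.backward(); opt.step()
```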
no code implementations • AABI Symposium 2021 • Rui Xia, Wessel Bruinsma, William Tebbutt, Richard E Turner
Many real-world prediction problems involve modelling the dependencies between multiple different outputs across the input space.
no code implementations • AABI Symposium 2019 • Pavel Berkovich, Eric Perim, Wessel Bruinsma
A simple and widely adopted approach to extend Gaussian processes (GPs) to multiple outputs is to model each output as a linear combination of a collection of shared, unobserved latent GPs.
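A small NumPy sketch of that construction; the kernel, dimensions, and mixing matrix below are arbitrary illustrative choices.

```python
# Linear mixing construction: sample m shared latent GPs with an RBF kernel
# and mix them with a matrix H to obtain p correlated outputs.
import numpy as np

def rbf(x, lengthscale=0.5):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 100)
m, p = 2, 4                                     # latent processes, observed outputs

K = rbf(x) + 1e-8 * np.eye(len(x))              # shared latent kernel (jitter for stability)
latents = rng.multivariate_normal(np.zeros(len(x)), K, size=m)   # (m, n) latent GP draws
H = rng.normal(size=(p, m))                     # mixing matrix
outputs = H @ latents                           # (p, n): each output is a linear combination
```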
no code implementations • 22 Feb 2018 • Wessel Bruinsma, Richard E. Turner
We present the Causal Gaussian Process Convolution Model (CGPCM), a doubly nonparametric model for causal, spectrally complex dynamical phenomena.
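For orientation, the convolutional construction underlying GPCM-style models can be sketched as below; the notation is illustrative, and causality is imposed by restricting the filter to nonnegative lags.

```latex
% Sketch (illustrative notation): the observed process f is a GP-distributed
% filter h convolved with a white-noise excitation x; causality means the
% filter vanishes for negative lags.
f(t) \;=\; \int h(t - \tau)\, x(\tau)\, \mathrm{d}\tau,
\qquad h \sim \mathcal{GP},
\qquad h(\tau) = 0 \ \text{for } \tau < 0 .
```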
1 code implementation • 20 Feb 2018 • James Requeima, Will Tebbutt, Wessel Bruinsma, Richard E. Turner
Multi-output regression models must exploit dependencies between outputs to maximise predictive performance.
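One standard way to capture such dependencies, sketched here with illustrative notation, is an autoregressive chain-rule factorisation over outputs in which each conditional is modelled by a GP that takes the preceding outputs as additional inputs.

```latex
% Chain-rule factorisation over P outputs (illustrative notation); each
% conditional p(y_p | x, y_{1:p-1}) is modelled with its own GP over the
% augmented input (x, y_1, ..., y_{p-1}).
p(y_1, \ldots, y_P \mid x) \;=\; \prod_{p=1}^{P} p\big(y_p \mid x, y_1, \ldots, y_{p-1}\big).
```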