1 code implementation • NeurIPS 2023 • Jean Kaddour, Oscar Key, Piotr Nawrot, Pasquale Minervini, Matt J. Kusner
The computation necessary for training Transformer-based language models has skyrocketed in recent years.
1 code implementation • 27 Jan 2023 • Ayush Bharti, Masha Naslidnyk, Oscar Key, Samuel Kaski, François-Xavier Briol
Likelihood-free inference methods typically make use of a distance between simulated and real data.
no code implementations • 15 Sep 2022 • Mingtian Zhang, Oscar Key, Peter Hayes, David Barber, Brooks Paige, François-Xavier Briol
Score-based divergences have been widely used in machine learning and statistics applications.
1 code implementation • 19 Nov 2021 • Oscar Key, Arthur Gretton, François-Xavier Briol, Tamara Fernandez
Model misspecification can create significant challenges for the implementation of probabilistic models, and this has led to the development of a range of robust methods which directly account for this issue.
1 code implementation • 16 Mar 2021 • Lisa Schut, Oscar Key, Rory McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Yarin Gal
Counterfactual explanations (CEs) are a practical tool for demonstrating why machine learning classifiers make particular decisions.
2 code implementations • 22 Feb 2021 • Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal
Inducing point Gaussian process approximations are often considered a gold standard in uncertainty estimation since they retain many of the properties of the exact GP and scale to large datasets.
no code implementations • 1 Jan 2021 • Joost van Amersfoort, Lewis Smith, Andrew Jesson, Oscar Key, Yarin Gal
Building on recent advances in uncertainty quantification using a single deep deterministic model (DUQ), we introduce variational Deterministic Uncertainty Quantification (vDUQ).
1 code implementation • 1 Nov 2020 • Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth
We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues.
1 code implementation • 8 Oct 2020 • Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal
Motivated by poor resource utilisation in the global setting and poor task performance in the local setting, we introduce a class of intermediary strategies between local and global learning, referred to as interlocking backpropagation.