Search Results for author: James Requeima

Found 14 papers, 8 papers with code

Diffusion-Augmented Neural Processes

no code implementations · 16 Nov 2023 · Lorenzo Bonito, James Requeima, Aliaksandra Shysheya, Richard E. Turner

Over the last few years, Neural Processes have become a useful modelling tool in many application areas, such as healthcare and climate sciences, in which data are scarce and prediction uncertainty estimates are indispensable.

Sim2Real for Environmental Neural Processes

1 code implementation · 30 Oct 2023 · Jonas Scholz, Tom R. Andersson, Anna Vaughan, James Requeima, Richard E. Turner

On held-out weather stations, Sim2Real training substantially outperforms the same model architecture trained only with reanalysis data or only with station data, showing that reanalysis data can serve as a stepping stone for learning from real observations.
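The two-stage protocol this describes can be sketched in a few lines. The following is an illustrative toy version (a linear model and plain gradient descent standing in for the paper's neural process and data): pretrain on abundant but slightly biased "reanalysis-like" data, then fine-tune the same parameters on scarce "station-like" observations. All names and data here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gd(w, X, y, lr, steps):
    """Plain full-batch gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_true = np.array([1.0, -2.0, 0.5])

# Stage 1 data: abundant but systematically biased (a stand-in for reanalysis).
X_sim = rng.normal(size=(500, 3))
y_sim = X_sim @ (w_true + 0.2) + 0.1 * rng.normal(size=500)

# Stage 2 data: scarce but unbiased (a stand-in for station observations).
X_real = rng.normal(size=(20, 3))
y_real = X_real @ w_true + 0.1 * rng.normal(size=20)

w = np.zeros(3)
w = gd(w, X_sim, y_sim, lr=0.1, steps=200)     # pretrain: the "stepping stone"
w = gd(w, X_real, y_real, lr=0.05, steps=100)  # fine-tune on real observations
```

Pretraining moves the parameters close to the truth despite the bias, so the few real observations suffice to finish the job.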

Challenges and Pitfalls of Bayesian Unlearning

no code implementations · 7 Jul 2022 · Ambrish Rawat, James Requeima, Wessel Bruinsma, Richard Turner

Machine unlearning refers to the task of removing a subset of training data, thereby removing its contributions to a trained model.

Machine Unlearning · Variational Inference
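A minimal concrete picture of this task (illustrative only, far simpler than the Bayesian setting the paper studies): for a "model" whose training reduces to additive sufficient statistics, a subset's contribution can be removed exactly by downdating those statistics, with no retraining from scratch.

```python
import numpy as np

# "Training": a sample-mean model summarised by additive statistics.
data = np.array([2.0, 4.0, 6.0, 8.0])
total, count = data.sum(), len(data)
mean = total / count

# "Unlearning": subtract the forgotten subset's contribution.
forget = np.array([6.0, 8.0])
total -= forget.sum()
count -= len(forget)
unlearned_mean = total / count

# Identical to retraining on only the retained data.
assert unlearned_mean == np.mean([2.0, 4.0])
```

For models without such additive structure (including the Bayesian posteriors considered in the paper), exact removal is much harder, which is where the challenges and pitfalls arise.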

Practical Conditional Neural Processes Via Tractable Dependent Predictions

no code implementations · 16 Mar 2022 · Stratis Markou, James Requeima, Wessel P. Bruinsma, Anna Vaughan, Richard E. Turner

Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018b) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.

Decision Making · Meta-Learning
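The "dependent predictions" in question are joint Gaussian predictives whose outputs are correlated, rather than the mean-field (independent-per-point) predictions of a basic CNP. An illustrative way to guarantee a valid joint predictive is a low-rank-plus-diagonal covariance parameterisation; this is a sketch of the idea, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 4  # number of target points (arbitrary for this sketch)
r = 2  # rank of the correlated part

mean = rng.normal(size=m)
F = rng.normal(size=(m, r))        # low-rank factor carrying correlations
d = np.exp(rng.normal(size=m))     # strictly positive diagonal noise

# Positive definite by construction, so this is always a valid covariance.
cov = F @ F.T + np.diag(d)

# A joint sample has correlated outputs, unlike a mean-field predictive.
sample = rng.multivariate_normal(mean, cov)
```

The appeal of such parameterisations is tractability: the joint Gaussian likelihood can be evaluated in closed form, keeping maximum-likelihood training simple.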

Practical Conditional Neural Process Via Tractable Dependent Predictions

no code implementations · ICLR 2022 · Stratis Markou, James Requeima, Wessel Bruinsma, Anna Vaughan, Richard E. Turner

Existing approaches which model output dependencies, such as Neural Processes (NPs; Garnelo et al., 2018) or the FullConvGNP (Bruinsma et al., 2021), are either complicated to train or prohibitively expensive.

Decision Making · Meta-Learning

Efficient Gaussian Neural Processes for Regression

no code implementations · 22 Aug 2021 · Stratis Markou, James Requeima, Wessel Bruinsma, Richard Turner

Conditional Neural Processes (CNP; Garnelo et al., 2018) are an attractive family of meta-learning models which produce well-calibrated predictions, enable fast inference at test time, and are trainable via a simple maximum likelihood procedure.

Decision Making · Meta-Learning +1
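The CNP recipe mentioned above has three ingredients: a permutation-invariant encoding of the context set, a decoder that maps that encoding plus a target input to a Gaussian mean and standard deviation, and a maximum-likelihood objective. A heavily simplified sketch with toy linear maps (not the paper's implementation; all weights and dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                                  # feature dimension (arbitrary)
W_enc = rng.normal(size=(2, D))        # toy encoder weights
W_dec = rng.normal(size=(D + 1, 2))    # toy decoder weights

def cnp_predict(x_ctx, y_ctx, x_tgt):
    # Encode each (x, y) context pair, then mean-pool: order-invariant.
    feats = np.tanh(np.stack([x_ctx, y_ctx], axis=1) @ W_enc)
    r = feats.mean(axis=0)
    # Decode (representation, target input) -> predictive mean and scale.
    h = np.concatenate([np.broadcast_to(r, (len(x_tgt), D)),
                        x_tgt[:, None]], axis=1)
    out = h @ W_dec
    return out[:, 0], np.exp(out[:, 1])          # exp keeps sigma positive

def gaussian_nll(y, mean, sigma):
    # Maximum-likelihood training minimises this objective over tasks.
    return 0.5 * np.mean(np.log(2 * np.pi * sigma**2) + ((y - mean) / sigma)**2)

x_ctx = np.linspace(-1.0, 1.0, 5)
y_ctx = np.sin(x_ctx)
mean, sigma = cnp_predict(x_ctx, y_ctx, np.array([0.0, 0.5]))
loss = gaussian_nll(np.sin(np.array([0.0, 0.5])), mean, sigma)
```

Inference at test time is a single forward pass through encoder and decoder, which is what makes CNPs fast; the per-point Gaussians are also what makes the basic CNP's predictions independent across targets.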

The Gaussian Neural Process

1 code implementation · AABI Symposium 2021 · Wessel P. Bruinsma, James Requeima, Andrew Y. K. Foong, Jonathan Gordon, Richard E. Turner

Neural Processes (NPs; Garnelo et al., 2018a, b) are a rich class of models for meta-learning that map data sets directly to predictive stochastic processes.

Meta-Learning · Translation

TaskNorm: Rethinking Batch Normalization for Meta-Learning

2 code implementations · ICML 2020 · John Bronskill, Jonathan Gordon, James Requeima, Sebastian Nowozin, Richard E. Turner

Modern meta-learning approaches for image classification rely on increasingly deep networks to achieve state-of-the-art performance, making batch normalization an essential component of meta-learning pipelines.

General Classification · Image Classification +1
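A heavily simplified sketch of the general idea being rethought here: in meta-learning, normalisation statistics can be computed from the task's context set rather than from whatever test batch happens to arrive, which stabilises test-time behaviour. This is illustrative of that idea only, not the TaskNorm algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_norm(context, target, eps=1e-5):
    # Moments come from the context set, so normalisation of target
    # activations does not depend on the test batch composition.
    mu = context.mean(axis=0)
    var = context.var(axis=0)
    return (target - mu) / np.sqrt(var + eps)

# Toy activations for one task's context set.
context = rng.normal(loc=3.0, scale=2.0, size=(32, 4))
normed_ctx = context_norm(context, context)
```

Normalising the context set against its own statistics recovers the usual zero-mean, unit-variance behaviour; crucially, a single test point would be normalised with the same statistics.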

Convolutional Conditional Neural Processes

3 code implementations · ICLR 2020 · Jonathan Gordon, Wessel P. Bruinsma, Andrew Y. K. Foong, James Requeima, Yann Dubois, Richard E. Turner

We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data.

Inductive Bias · Time Series +3
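The inductive bias at the heart of the ConvCNP is that convolutions commute with translations: shifting the input function and then applying the model gives the same result as applying the model and then shifting. A minimal numerical check of that property with a toy circular convolution (illustrative only, not the paper's architecture):

```python
import numpy as np

kernel = np.array([0.25, 0.5, 0.25])  # arbitrary smoothing kernel

def conv_layer(signal):
    # Same-size circular convolution, so equivariance holds exactly.
    n = len(signal)
    return np.array([sum(kernel[k] * signal[(i + k - 1) % n]
                         for k in range(3)) for i in range(n)])

signal = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
shifted = np.roll(signal, 3)

# Shift-then-convolve equals convolve-then-shift.
lhs = conv_layer(shifted)
rhs = np.roll(conv_layer(signal), 3)
assert np.allclose(lhs, rhs)
```

Building this symmetry into the model means a pattern learned at one input location transfers automatically to every other location, which is the advertised benefit for spatial and time-series data.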

Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

1 code implementation · NeurIPS 2019 · James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, Richard E. Turner

We introduce a conditional neural process based approach to the multi-task classification setting for this purpose, and establish connections to the meta-learning and few-shot learning literature.

Active Learning · Continual Learning +4
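The conditional idea can be sketched as adapting a shared feature extractor to each new task using parameters generated from the task's context set, for example FiLM-style scale-and-shift modulation. This is an illustrative toy in the spirit of such adaptive models, not the CNAPs implementation; all weights and dimensions are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 6                                    # feature dimension (arbitrary)
W_feat = rng.normal(size=(2, D))         # shared (toy, linear) extractor
W_gamma = 0.1 * rng.normal(size=(D, D))  # adaptation nets producing
W_beta = 0.1 * rng.normal(size=(D, D))   # per-task scale and shift

def task_embedding(x_ctx):
    # Permutation-invariant summary of the task's context set.
    return np.tanh(x_ctx @ W_feat).mean(axis=0)

def adapted_features(x, x_ctx):
    e = task_embedding(x_ctx)
    gamma = 1.0 + e @ W_gamma            # task-specific FiLM scale
    beta = e @ W_beta                    # task-specific FiLM shift
    return gamma * np.tanh(x @ W_feat) + beta

x_ctx = rng.normal(size=(10, 2))         # context set defining the task
feats = adapted_features(rng.normal(size=(4, 2)), x_ctx)
```

Because adaptation is a forward pass rather than gradient-based fine-tuning, a new task costs only one evaluation of the adaptation network, which is what makes this style of multi-task classification fast.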

The Gaussian Process Autoregressive Regression Model (GPAR)

1 code implementation · 20 Feb 2018 · James Requeima, Will Tebbutt, Wessel Bruinsma, Richard E. Turner

Multi-output regression models must exploit dependencies between outputs to maximise predictive performance.

Gaussian Processes · regression
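GPAR's way of exploiting those dependencies is autoregressive: output j is modelled conditional on the inputs and on outputs 1..j-1, which are fed in as extra features. A toy sketch of that chained structure with least-squares regressors standing in for the Gaussian processes (illustrative only, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-output data where the second output depends on the first.
n = 50
x = rng.uniform(-1, 1, size=(n, 1))
y1 = np.sin(3 * x[:, 0]) + 0.05 * rng.normal(size=n)
y2 = y1**2 + 0.05 * rng.normal(size=n)
Y = np.stack([y1, y2], axis=1)

def fit_chain(x, Y):
    """Fit each output on [inputs, bias, previously modelled outputs]."""
    weights = []
    feats = np.concatenate([x, np.ones((len(x), 1))], axis=1)
    for j in range(Y.shape[1]):
        w, *_ = np.linalg.lstsq(feats, Y[:, j], rcond=None)
        weights.append(w)
        feats = np.concatenate([feats, Y[:, [j]]], axis=1)  # condition on y_j
    return weights

def predict_chain(x_new, weights):
    """Predict outputs in order, feeding each prediction to the next model."""
    feats = np.concatenate([x_new, np.ones((len(x_new), 1))], axis=1)
    preds = []
    for w in weights:
        y_hat = feats @ w
        preds.append(y_hat)
        feats = np.concatenate([feats, y_hat[:, None]], axis=1)
    return np.stack(preds, axis=1)

weights = fit_chain(x, Y)
preds = predict_chain(rng.uniform(-1, 1, size=(5, 1)), weights)
```

The ordering matters: later outputs get to borrow statistical strength from earlier ones, which is where the gain over modelling each output independently comes from.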
