Search Results for author: Chris J. Oates

Found 6 papers, 3 papers with code

Meta-learning Control Variates: Variance Reduction with Limited Data

1 code implementation • 8 Mar 2023 • Zhuo Sun, Chris J. Oates, François-Xavier Briol

Control variates can be a powerful tool to reduce the variance of Monte Carlo estimators, but constructing effective control variates can be challenging when the number of samples is small.

Meta-Learning
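As background for the abstract above, here is a minimal control variate sketch (the classical, non-meta-learned construction; the integrand, sample size and control function are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: estimate E[exp(X)] for X ~ N(0, 1), using g(X) = X as a control
# variate with known mean E[g(X)] = 0.  With few samples, the estimated optimal
# coefficient beta reduces variance relative to plain Monte Carlo.
n = 50                       # deliberately small sample size
x = rng.standard_normal(n)
f = np.exp(x)                # integrand evaluations
g = x                        # control variate evaluations (known mean 0)

beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)   # estimated optimal coefficient
plain = f.mean()                                # vanilla Monte Carlo estimate
cv = (f - beta * (g - 0.0)).mean()              # control-variate estimate

print(f"plain MC: {plain:.4f}  with control variate: {cv:.4f}  truth: {np.exp(0.5):.4f}")
```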

Maximum Likelihood Estimation in Gaussian Process Regression is Ill-Posed

no code implementations • 17 Mar 2022 • Toni Karvonen, Chris J. Oates

Gaussian process regression underpins countless academic and industrial applications of machine learning and statistics, with maximum likelihood estimation routinely used to select appropriate parameters for the covariance kernel.

Regression
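For context, "maximum likelihood estimation" here means maximising the Gaussian process marginal likelihood over the kernel parameters $\theta$. The standard zero-mean form (notation assumed, not quoted from the paper) is:

```latex
% y in R^n are observations at inputs x_1, ..., x_n, with (K_theta)_{ij} = k_theta(x_i, x_j).
\log p(\mathbf{y} \mid \theta)
  = -\tfrac{1}{2} \mathbf{y}^\top K_\theta^{-1} \mathbf{y}
    - \tfrac{1}{2} \log \det K_\theta
    - \tfrac{n}{2} \log 2\pi,
\qquad
\hat{\theta} = \arg\max_\theta \, \log p(\mathbf{y} \mid \theta).
```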

Testing whether a Learning Procedure is Calibrated

no code implementations • 23 Dec 2020 • Jon Cockayne, Matthew M. Graham, Chris J. Oates, T. J. Sullivan

A learning procedure takes as input a dataset and performs inference for the parameters $\theta$ of a model that is assumed to have given rise to the dataset.

Bayesian Inference • Statistics Theory
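To make the notion of calibration concrete, the sketch below checks frequentist coverage of credible intervals for an exactly conjugate model; this illustrates only the general idea and is not the test proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coverage check: prior theta ~ N(0, 1), data y_1..y_m ~ N(theta, 1).
# The "learning procedure" here returns the exact conjugate posterior, so nominal
# 90% credible intervals should contain theta in roughly 90% of repetitions.
m, trials = 20, 2000
z = 1.6449                                       # N(0,1) quantile for a central 90% interval
covered = 0
for _ in range(trials):
    theta = rng.standard_normal()                # draw a "true" parameter from the prior
    y = theta + rng.standard_normal(m)           # simulate a dataset
    post_var = 1.0 / (1.0 + m)                   # conjugate posterior variance
    post_mean = post_var * y.sum()               # conjugate posterior mean
    covered += abs(theta - post_mean) <= z * np.sqrt(post_var)
print(f"empirical coverage: {covered / trials:.3f} (nominal 0.90)")
```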

Probabilistic Iterative Methods for Linear Systems

no code implementations • 23 Dec 2020 • Jon Cockayne, Ilse C. F. Ipsen, Chris J. Oates, Tim W. Reid

This paper presents a probabilistic perspective on iterative methods for approximating the solution $\mathbf{x}_* \in \mathbb{R}^d$ of a nonsingular linear system $\mathbf{A} \mathbf{x}_* = \mathbf{b}$.

Uncertainty Quantification
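A minimal sketch of treating the iterates probabilistically is given below: a stationary iteration $\mathbf{x}_{k+1} = G \mathbf{x}_k + \mathbf{f}$ is affine, so a Gaussian belief about the solution is mapped to another Gaussian at every step. The system, relaxation parameter and prior are illustrative choices, not taken from the paper:

```python
import numpy as np

# Richardson iteration x_{k+1} = (I - omega*A) x_k + omega*b pushes a Gaussian
# belief N(mu, Sigma) about the solution to N(G mu + f, G Sigma G^T) at each step.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
omega = 0.2                               # chosen so the iteration converges
G = np.eye(2) - omega * A
f = omega * b

mu = np.zeros(2)                          # prior mean for the solution
Sigma = np.eye(2)                         # prior covariance for the solution
for _ in range(50):
    mu = G @ mu + f                       # propagate the mean (the classical iterate)
    Sigma = G @ Sigma @ G.T               # propagate the covariance

print("final mean:   ", mu)
print("direct solve: ", np.linalg.solve(A, b))
print("remaining covariance trace:", np.trace(Sigma))
```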

Measure Transport with Kernel Stein Discrepancy

1 code implementation • 22 Oct 2020 • Matthew A. Fisher, Tui Nolan, Matthew M. Graham, Dennis Prangle, Chris J. Oates

Measure transport underpins several recent algorithms for posterior approximation in the Bayesian context, wherein a transport map is sought to minimise the Kullback--Leibler divergence (KLD) from the posterior to the approximation.
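The discrepancy the paper substitutes for the KLD is the kernel Stein discrepancy. Its standard closed form, for a base kernel $k$ and target density $p$ with score $\nabla \log p$ (notation assumed, not quoted from the paper), is:

```latex
\mathrm{KSD}^2(q \,\|\, p) = \mathbb{E}_{x, x' \sim q}\left[ k_p(x, x') \right],
\quad
k_p(x, x') = \nabla_x \cdot \nabla_{x'} k(x, x')
  + \nabla_x \log p(x) \cdot \nabla_{x'} k(x, x')
  + \nabla_{x'} \log p(x') \cdot \nabla_x k(x, x')
  + k(x, x') \, \nabla_x \log p(x) \cdot \nabla_{x'} \log p(x').
```

Its appeal in this setting is that it can be estimated using only samples from $q$ and the score of the unnormalised target $p$.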

The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks

1 code implementation • 16 Oct 2020 • Takuo Matsubara, Chris J. Oates, François-Xavier Briol

Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network.

Gaussian Processes
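The sketch below illustrates the general idea of judging a weight-space prior by the distribution it induces on the network outputs, using a wide one-hidden-layer network with conventional Gaussian weights; it is not the ridgelet prior construction from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample functions from a single-hidden-layer network with i.i.d. Gaussian weights and
# inspect the induced prior over outputs, which is what one would compare against the
# posited Gaussian process.  Width and scalings are conventional illustrative choices.
x = np.linspace(-3.0, 3.0, 200)[:, None]          # 1-D inputs
width = 2000                                       # hidden-layer width

def sample_network_function(x, width, rng):
    """Draw one function from the network's output-space prior."""
    w1 = rng.standard_normal((1, width))              # input-to-hidden weights
    b1 = rng.standard_normal(width)                   # hidden biases
    w2 = rng.standard_normal(width) / np.sqrt(width)  # scaled output weights
    return np.tanh(x @ w1 + b1) @ w2

draws = np.stack([sample_network_function(x, width, rng) for _ in range(500)])
print("pointwise prior std of the network outputs (first 5 inputs):")
print(draws.std(axis=0)[:5])                       # compare against the GP's marginal std
```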
