Search Results for author: Michael Hutchinson

Found 9 papers, 5 papers with code

Target Score Matching

no code implementations • 13 Feb 2024 Valentin De Bortoli, Michael Hutchinson, Peter Wirnsberger, Arnaud Doucet

Denoising Score Matching estimates the score of a noised version of a target distribution by minimizing a regression loss and is widely used to train the popular class of Denoising Diffusion Models.

Tasks: Denoising, regression
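A minimal sketch of the denoising score matching objective described in the abstract above, assuming a Gaussian noising kernel x_t = x_0 + sigma * eps; score_net is a hypothetical callable, not code from the paper.

import numpy as np

def dsm_loss(score_net, x0, sigma, rng=np.random.default_rng()):
    # For the Gaussian kernel q(x_t | x_0) = N(x_0, sigma^2 I), the
    # conditional score is -(x_t - x_0) / sigma^2 = -eps / sigma; the
    # network is regressed onto this target with a squared-error loss.
    eps = rng.standard_normal(x0.shape)
    xt = x0 + sigma * eps
    target = -eps / sigma
    pred = score_net(xt, sigma)
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

# Example: for data concentrated at zero, the linear map -x / sigma^2 is
# the exact score of the noised distribution, so the loss is zero.
x0 = np.zeros((8, 2))
loss = dsm_loss(lambda x, s: -x / s**2, x0, sigma=0.1)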

Diffusion Models for Constrained Domains

1 code implementation • 11 Apr 2023 Nic Fishman, Leo Klarner, Valentin De Bortoli, Emile Mathieu, Michael Hutchinson

Denoising diffusion models are a novel class of generative algorithms that achieve state-of-the-art performance across a range of domains, including image generation and text-to-image tasks.

Tasks: Denoising, Image Generation, +2

Spectral Diffusion Processes

no code implementations • 28 Sep 2022 Angus Phillips, Thomas Seror, Michael Hutchinson, Valentin De Bortoli, Arnaud Doucet, Emile Mathieu

Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.

Tasks: Dimensionality Reduction

Riemannian Diffusion Schrödinger Bridge

no code implementations • 7 Jul 2022 James Thornton, Michael Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet

Our proposed method generalizes the Diffusion Schrödinger Bridge introduced in De Bortoli et al. (2021) to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal.

Tasks: Density Estimation

Riemannian Score-Based Generative Modelling

2 code implementations • 6 Feb 2022 Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet

Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance.

Tasks: Denoising

Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels

no code implementations • NeurIPS 2021 Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth

Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems.

Tasks: BIG-bench Machine Learning, Decision Making, +2
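To make "represents uncertainty" concrete, here is a minimal Euclidean GP regression sketch with an RBF kernel (NumPy only); it is illustrative, not the gauge-independent projected-kernel construction of the paper, which handles vector fields on manifolds.

import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression equations via a Cholesky factorisation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf(x_test, x_test) - v.T @ v)
    return mean, np.sqrt(var)  # predictive mean and standard deviation

x = np.array([-1.0, 0.0, 1.0])
mean, std = gp_posterior(x, np.sin(x), np.linspace(-3.0, 3.0, 7))
# std grows away from the training points: the model quantifies what it
# does not know, the property that supports downstream decision making.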

LieTransformer: Equivariant self-attention for Lie Groups

1 code implementation • 20 Dec 2020 Michael Hutchinson, Charline Le Lan, Sheheryar Zaidi, Emilien Dupont, Yee Whye Teh, Hyunjik Kim

Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing.

Tasks: regression
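A toy illustration of the "principled parameter sharing" mentioned above, not the LieTransformer itself: averaging an arbitrary linear layer over the finite group of cyclic shifts (a stand-in for a Lie group) yields a map that commutes with every shift.

import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))  # arbitrary, non-equivariant weights

# Reynolds averaging: W_eq = (1/|G|) * sum_g P_g W P_g^{-1}, where P_g
# cyclically shifts coordinates by g; conjugating W by P_g corresponds
# to rolling both axes of the weight matrix by g.
W_eq = np.mean(
    [np.roll(np.roll(W, g, axis=0), g, axis=1) for g in range(n)], axis=0
)

x = rng.standard_normal(n)
for g in range(n):
    # Equivariance: apply the layer then shift == shift then apply the layer.
    assert np.allclose(W_eq @ np.roll(x, g), np.roll(W_eq @ x, g))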

Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes

1 code implementation • 25 Nov 2020 Peter Holderrieth, Michael Hutchinson, Yee Whye Teh

Motivated by objects such as electric fields or fluid streams, we study the problem of learning stochastic fields, i.e., stochastic processes whose samples are fields like those occurring in physics and engineering.

Tasks: Gaussian Processes, Transfer Learning
