Gaussian Processes

568 papers with code • 1 benchmark • 5 datasets

Gaussian processes are a powerful framework for several machine learning tasks such as regression, classification, and inference. Given a finite set of input-output training data generated from a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process such that the training outputs are a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (the mean and variance) of the function at test inputs.

Source: Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization
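
A minimal regression sketch with scikit-learn's GaussianProcessRegressor illustrates this workflow; the toy data, RBF-plus-noise kernel, and hyper-parameter settings below are illustrative assumptions rather than choices from the source paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D training data drawn from a fixed (here, known) function.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 10.0, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# RBF kernel plus a learned noise term; hyper-parameters are fit by
# maximizing the marginal likelihood during .fit().
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Posterior mean and standard deviation at test inputs.
X_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)
```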

Most implemented papers

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

yaringal/DropoutUncertaintyExps 6 Jun 2015

In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost.
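
As a rough sketch of the Monte Carlo dropout idea behind this paper (not the code in yaringal/DropoutUncertaintyExps), one can keep dropout active at test time and average several stochastic forward passes; the PyTorch architecture and sample count here are placeholder choices.

```python
import torch
import torch.nn as nn

# Small regression network with dropout; the architecture is illustrative.
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout active at test time and average stochastic forward passes."""
    model.train()  # leaves dropout layers in sampling mode
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x_test = torch.linspace(-3, 3, 50).unsqueeze(-1)
mean, std = mc_dropout_predict(model, x_test)
```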

Conditional Neural Processes

deepmind/neural-processes ICML 2018

Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function.

Gaussian Processes for Big Data

cornellius-gp/gpytorch 26 Sep 2013

We introduce stochastic variational inference for Gaussian process models.
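
The listed implementation lives in GPyTorch, where a stochastic variational GP might be sketched as below; the class names follow GPyTorch's public API as best understood, and the inducing-point count and kernel choice are illustrative assumptions.

```python
import torch
import gpytorch

class SVGPModel(gpytorch.models.ApproximateGP):
    """Sparse variational GP with learned inducing point locations."""
    def __init__(self, inducing_points):
        variational_dist = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_dist, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# 128 inducing points summarize the full training set; training maximizes the
# variational ELBO (gpytorch.mlls.VariationalELBO) over mini-batches.
model = SVGPModel(inducing_points=torch.randn(128, 1))
likelihood = gpytorch.likelihoods.GaussianLikelihood()
```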

Doubly Stochastic Variational Inference for Deep Gaussian Processes

ICL-SML/Doubly-Stochastic-DGP NeurIPS 2017

Existing approaches to inference in DGP models assume approximate posteriors that force independence between the layers, and do not work well in practice.

Deep Neural Networks as Gaussian Processes

brain-research/nngp ICLR 2018

As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.

Neural Tangent Kernel: Convergence and Generalization in Neural Networks

thegregyang/GP4A NeurIPS 2018

While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training.
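
For reference, the neural tangent kernel of a network f with parameters theta is the inner product of parameter gradients; this is the standard definition, independent of the listed repository.

```latex
\Theta(x, x') \;=\; \big\langle \nabla_{\theta} f_{\theta}(x),\; \nabla_{\theta} f_{\theta}(x') \big\rangle
```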

Deep Kernel Learning

ziatdinovmax/gpax 6 Nov 2015

We introduce scalable deep kernels, which combine the structural properties of deep learning architectures with the non-parametric flexibility of kernel methods.
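
Schematically, a deep kernel passes inputs through a neural network g(·; w) before applying a base kernel k(·, ·; theta), with the network weights w and kernel hyper-parameters theta learned jointly; the notation below is one common way of writing this, not quoted verbatim from the paper.

```latex
k_{\mathrm{deep}}(x, x') \;=\; k\big(g(x; w),\, g(x'; w);\, \theta\big)
```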

Adversarial Robustness Toolbox v1.0.0

IBM/adversarial-robustness-toolbox 3 Jul 2018

Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary.

Efficiently Sampling Functions from Gaussian Process Posteriors

j-wilson/GPflowSampling ICML 2020

Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model's success hinges upon its ability to faithfully represent predictive uncertainty.
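
One standard identity for drawing posterior function samples, often used in pathwise approaches of this kind, updates a prior draw with the observed residuals; the Gaussian-noise notation below is an assumption for illustration rather than a quotation from the paper.

```latex
(f \mid \mathbf{y})(\cdot) \;=\; f(\cdot) \;+\; K(\cdot, X)\,\big(K(X, X) + \sigma^{2} I\big)^{-1}\big(\mathbf{y} - f(X) - \boldsymbol{\varepsilon}\big),
\qquad \boldsymbol{\varepsilon} \sim \mathcal{N}(0, \sigma^{2} I)
```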

Kernels for Vector-Valued Functions: a Review

naka-tomo/multi_output_gp 30 Jun 2011

Kernel methods are among the most popular techniques in machine learning.
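
A common construction for vector-valued (multi-output) kernels covered by reviews in this area is the intrinsic coregionalization model, which couples a scalar input kernel with a positive semi-definite matrix over the D outputs; the notation below is illustrative.

```latex
\big[\mathbf{K}(x, x')\big]_{d, d'} \;=\; B_{d d'}\, k(x, x'),
\qquad B \in \mathbb{R}^{D \times D},\; B \succeq 0
```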