Gaussian Processes
568 papers with code • 1 benchmark • 5 datasets
Gaussian processes provide a powerful framework for several machine learning tasks such as regression, classification and inference. Given a finite set of input-output training data generated by a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process: the training outputs are treated as a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (the mean and variance) of the function at test inputs.
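A minimal sketch of that conditioning step, in plain NumPy with an RBF kernel and a noise level chosen purely for illustration (the kernel, data and hyperparameters are assumptions, not taken from any particular paper below):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

# Toy training data drawn from a fixed (here: sine) function plus noise.
rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=20)
y_train = np.sin(X_train) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-3, 3, 100)

noise = 0.1 ** 2  # assumed observation-noise variance
K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
K_s = rbf_kernel(X_train, X_test)
K_ss = rbf_kernel(X_test, X_test)

# Posterior mean and covariance of the function at the test inputs,
# obtained by conditioning the joint Gaussian on the observed outputs.
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
cov = K_ss - v.T @ v
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))  # clip tiny negative values
```

The Cholesky-based solve is the textbook $\mathcal{O}(n^3)$ route; several of the papers listed below exist precisely to avoid that cost at scale.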
Libraries
Use these libraries to find Gaussian Processes models and implementations.

Most implemented papers
Scalable Bayesian Optimization Using Deep Neural Networks
Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations.
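A hedged sketch of the generic Bayesian-optimization loop this sentence refers to, using a scikit-learn GP surrogate and an expected-improvement acquisition (the paper itself replaces the GP surrogate with a neural network; the objective and settings here are illustrative only):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    """Stand-in for an expensive black-box function (illustration only)."""
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))          # initial design
y = objective(X).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(15):
    # Refit the surrogate on all evaluations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)

    # Expected improvement over the best observation so far (minimization).
    best = y.min()
    improvement = best - mu
    z = improvement / (std + 1e-9)
    ei = improvement * norm.cdf(z) + std * norm.pdf(z)

    # Evaluate the expensive function where the acquisition is largest.
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())
```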
Convolutional Gaussian Processes
We present a practical way of introducing convolutional structure into Gaussian processes, making them more suited to high-dimensional inputs like images.
Probabilistic Recurrent State-Space Models
State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification.
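For reference, a generic SSM (not this paper's exact parameterisation) posits latent states $x_t$ with $x_{t+1} = f(x_t) + \epsilon_t$, $\epsilon_t \sim \mathcal{N}(0, Q)$, and observations $y_t = g(x_t) + \nu_t$, $\nu_t \sim \mathcal{N}(0, R)$; placing a GP prior $f \sim \mathcal{GP}(0, k(\cdot, \cdot))$ on the transition function yields a Gaussian process state-space model.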
Differentiable Compositional Kernel Learning for Gaussian Processes
The Neural Kernel Network (NKN) architecture is based on composition rules for kernels, so that each unit of the network corresponds to a valid kernel.
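Those composition rules rest on the closure properties of positive-definite kernels: sums and products of valid kernels are again valid kernels. A small illustration with scikit-learn's kernel objects (a fixed, hand-written composition rather than the learned one the NKN uses):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, DotProduct, WhiteKernel

# Closure rules: sums and products of positive-definite kernels are
# positive definite, so composite expressions like this remain valid kernels.
composite_kernel = (RBF(length_scale=1.0) * DotProduct()
                    + RationalQuadratic(alpha=0.5)
                    + WhiteKernel(noise_level=0.1))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(2 * X).ravel() + 0.3 * X.ravel() ** 2

gp = GaussianProcessRegressor(kernel=composite_kernel)
gp.fit(X, y)          # kernel hyperparameters are tuned by marginal likelihood
print(gp.kernel_)     # the fitted composite kernel
```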
GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration
Despite advances in scalable models, the inference tools used for Gaussian processes (GPs) have yet to fully capitalize on developments in computing hardware.
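A minimal sketch of the usual GPyTorch exact-GP workflow (assuming torch and gpytorch are installed; the data, kernel and training loop are illustrative, and moving the model, likelihood and tensors to the GPU with .cuda() is what engages the hardware acceleration the paper targets):

```python
import torch
import gpytorch

train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Fit hyperparameters by maximizing the exact marginal log likelihood; the
# heavy linear algebra is dispatched to batched matrix-matrix routines.
model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Predictive mean and variance at new inputs.
model.eval(); likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(torch.linspace(0, 1, 51)))
    mean, var = pred.mean, pred.variance
```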
Pre-trained Gaussian processes for Bayesian optimization
Contrary to a common expectation that Bayesian optimization (BO) is suited to optimizing black-box functions, deploying BO successfully actually requires domain knowledge about those functions.
Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP)
We introduce a new structured kernel interpolation (SKI) framework, which generalises and unifies inducing point methods for scalable Gaussian processes (GPs).
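Roughly, SKI replaces the exact kernel matrix with an interpolation onto $m$ grid-structured inducing points $U$: $K_{XX} \approx W K_{UU} W^{\top}$, where $W$ is a sparse matrix of local (e.g. cubic) interpolation weights. Because $K_{UU}$ on a regular grid has Kronecker and Toeplitz structure, matrix-vector products with the approximate kernel become very fast, which is what KISS-GP exploits for scalable inference.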
Thoughts on Massively Scalable Gaussian Processes
The framework combines: a multi-level circulant approximation, which unifies the orthogonal computational benefits of fast Kronecker and Toeplitz approaches and is significantly faster than either approach in isolation; local kernel interpolation and inducing points, which allow arbitrarily located data inputs and $O(1)$ test-time predictions; exploitation of block-Toeplitz Toeplitz-block (BTTB) structure, which enables fast inference and learning when multidimensional Kronecker structure is not present; and projections of the input space, which flexibly model correlated inputs and high-dimensional data.
Scalable Log Determinants for Gaussian Process Kernel Learning
For applications as varied as Bayesian neural networks, determinantal point processes, elliptical graphical models, and kernel learning for Gaussian processes (GPs), one must compute a log determinant of an $n \times n$ positive definite matrix, and its derivatives, leading to prohibitive $\mathcal{O}(n^3)$ computations.
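The route around that cost in this line of work is to use $\log \det K = \operatorname{tr}(\log K)$ and estimate the trace stochastically, e.g. with Hutchinson probe vectors, $\operatorname{tr}(\log K) \approx \frac{1}{t} \sum_{i=1}^{t} z_i^{\top} (\log K)\, z_i$ with $\mathbb{E}[z_i z_i^{\top}] = I$, where each quadratic form is computed by Lanczos or Chebyshev approximations that require only matrix-vector products with $K$.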
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution.