Gaussian Processes

568 papers with code • 1 benchmark • 5 datasets

Gaussian Processes are a powerful framework for several machine learning tasks such as regression, classification and inference. Given a finite set of input-output training data generated by a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process such that the training outputs are a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (the mean and variance) of the function at test inputs.

Source: Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization
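
Because the training and test outputs are jointly Gaussian, the posterior mean and variance at a test input follow from the standard Gaussian conditioning formulas. As a rough illustration only (not drawn from the cited source), the sketch below implements exact GP regression with a squared-exponential kernel in plain NumPy; the kernel hyper-parameters, noise level, and toy data are arbitrary example values.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sqdist = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and marginal variance of a GP at x_test, given noisy training data."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)    # train-test covariances
    K_ss = rbf_kernel(x_test, x_test)    # test-test covariances
    L = np.linalg.cholesky(K)            # stable solve via Cholesky factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                 # posterior mean at test inputs
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)        # posterior marginal variances
    return mean, var

# Toy data drawn from a fixed but "unknown" function
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + 0.1 * np.random.randn(20)
x_test = np.linspace(0, 5, 100)
mu, var = gp_posterior(x_train, y_train, x_test)
```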

Latest papers with no code

Neural Operator induced Gaussian Process framework for probabilistic solution of parametric partial differential equations

no code yet • 24 Apr 2024

The study of neural operators has paved the way for the development of efficient approaches for solving partial differential equations (PDEs) compared with traditional methods.

A New Reliable & Parsimonious Learning Strategy Comprising Two Layers of Gaussian Processes, to Address Inhomogeneous Empirical Correlation Structures

no code yet • 18 Apr 2024

We present a new strategy for learning the functional relation between a pair of variables, while addressing inhomogeneities in the correlation structure of the available data, by modelling the sought function as a sample function of a non-stationary Gaussian Process (GP) that nests within itself multiple other GPs, each of which we prove can be stationary, thereby establishing the sufficiency of two GP layers.

Analytical results for uncertainty propagation through trained machine learning regression models

no code yet • 17 Apr 2024

Machine learning (ML) models are increasingly being used in metrology applications.

BayesJudge: Bayesian Kernel Language Modelling with Confidence Uncertainty in Legal Judgment Prediction

no code yet • 16 Apr 2024

Predicting legal judgments with reliable confidence is paramount for responsible legal AI applications.

Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes

no code yet • 8 Apr 2024

This paper proposes a semi-supervised methodology for training physics-informed machine learning methods.

Conditioning of Banach Space Valued Gaussian Random Variables: An Approximation Approach Based on Martingales

no code yet • 4 Apr 2024

In this paper we investigate the conditional distributions of two Banach space valued, jointly Gaussian random variables.

Universal Functional Regression with Neural Operator Flows

no code yet • 3 Apr 2024

We empirically study the performance of OpFlow on regression and generation tasks with data generated from Gaussian processes with known posterior forms and non-Gaussian processes, as well as real-world earthquake seismograms with an unknown closed-form distribution.

Tensor Network-Constrained Kernel Machines as Gaussian Processes

no code yet • 28 Mar 2024

We analyze the convergence of both CPD and TT-constrained models, and show how TT yields models exhibiting more GP behavior compared to CPD, for the same number of model parameters.

A Unified Kernel for Neural Network Learning

no code yet • 26 Mar 2024

Two predominant approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK).

Multi-Agent Clarity-Aware Dynamic Coverage with Gaussian Processes

no code yet • 26 Mar 2024

This paper presents two algorithms for multi-agent dynamic coverage in spatiotemporal environments, where the coverage algorithms are informed by the method of data assimilation.