Gaussian Processes

573 papers with code • 1 benchmark • 5 datasets

Gaussian processes form a powerful framework for machine learning tasks such as regression, classification, and inference. Given a finite set of input-output training data generated by a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process: the training outputs are treated as a finite number of jointly Gaussian random variables, whose properties can then be used to infer the statistics (mean and variance) of the function at test inputs.

Source: Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization
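The posterior inference described above can be sketched in a few lines of NumPy. This is a minimal illustration, not from the cited paper: it assumes a zero-mean prior, a squared-exponential (RBF) kernel, and hypothetical length-scale and noise parameters.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6, length_scale=1.0):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                           # predictive mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)   # predictive variance
    return mean, var

X = np.array([-2.0, 0.0, 1.5])
y = np.sin(X)
mu, var = gp_posterior(X, y, np.array([0.0]))
# at a training input, the posterior mean reverts to the observed value
# and the predictive variance collapses toward the noise level
```

At inputs far from the training data, the mean reverts to the prior (zero) and the variance grows back toward the prior variance, which is what makes the predictive uncertainty useful.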

Latest papers with no code

Tensor Network-Constrained Kernel Machines as Gaussian Processes

no code yet • 28 Mar 2024

We analyze the convergence of both CPD and TT-constrained models, and show how TT yields models exhibiting more GP behavior compared to CPD, for the same number of model parameters.

A Unified Kernel for Neural Network Learning

no code yet • 26 Mar 2024

Two predominant approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK).

Multi-Agent Clarity-Aware Dynamic Coverage with Gaussian Processes

no code yet • 26 Mar 2024

This paper presents two algorithms for multi-agent dynamic coverage in spatiotemporal environments, where the coverage algorithms are informed by the method of data assimilation.

Learning Piecewise Residuals of Control Barrier Functions for Safety of Switching Systems using Multi-Output Gaussian Processes

no code yet • 26 Mar 2024

This uncertainty results in piecewise residuals for each switching surface, impacting the CLF and CBF constraints.

Guided Bayesian Optimization: Data-Efficient Controller Tuning with Digital Twin

no code yet • 25 Mar 2024

This article presents the guided Bayesian optimization algorithm as an efficient data-driven method for iteratively tuning closed-loop controller parameters using an event-triggered digital twin of the system based on available closed-loop data.

Kernel Multigrid: Accelerate Back-fitting via Sparse Gaussian Process Regression

no code yet • 20 Mar 2024

By utilizing a technique called Kernel Packets (KP), we prove that the convergence rate of Back-fitting is no faster than $(1-\mathcal{O}(\frac{1}{n}))^t$, where $n$ and $t$ denote the data size and the iteration number, respectively.
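A quick way to see what a per-iteration contraction factor of $(1-\mathcal{O}(\frac{1}{n}))$ implies: reaching a fixed tolerance $\epsilon$ requires roughly $t \sim n \log(1/\epsilon)$ iterations, i.e. the iteration count grows linearly with the data size. The snippet below is an illustrative numeric check with an assumed constant $c = 1$, not a result from the paper.

```python
import math

def iterations_needed(n, c=1.0, eps=1e-6):
    """Iterations until (1 - c/n)^t <= eps, for an assumed constant c."""
    rate = 1.0 - c / n
    return math.ceil(math.log(eps) / math.log(rate))

# The required iteration count scales roughly linearly with n:
print(iterations_needed(100))
print(iterations_needed(10_000))
```

Increasing $n$ by a factor of 100 increases the iteration count by roughly the same factor, which is why the authors look to sparse Gaussian process regression to accelerate Back-fitting.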

Composite likelihood estimation of stationary Gaussian processes with a view toward stochastic volatility

no code yet • 19 Mar 2024

We develop a framework for composite likelihood inference of parametric continuous-time stationary Gaussian processes.

Informed Spectral Normalized Gaussian Processes for Trajectory Prediction

no code yet • 18 Mar 2024

Previous work has shown that using such informative priors to regularize probabilistic deep learning (DL) models increases their performance and data-efficiency.

A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling

no code yet • 16 Mar 2024

Numerical solvers of partial differential equations (PDEs) have been widely employed for simulating physical systems.

On the Laplace Approximation as Model Selection Criterion for Gaussian Processes

no code yet • 14 Mar 2024

Our model selection criteria allow significantly faster and higher-quality model selection for Gaussian process models.