Gaussian Processes
566 papers with code • 1 benchmark • 5 datasets
Gaussian Processes are a powerful framework for several machine learning tasks such as regression, classification and inference. Given a finite set of input-output training data generated by a fixed (but possibly unknown) function, the framework models the unknown function as a stochastic process such that the training outputs are a finite number of jointly Gaussian random variables. Their joint distribution can then be used to infer the statistics (the mean and variance) of the function at test inputs.
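The posterior inference described above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a zero-mean prior, a squared-exponential kernel with unit hyperparameters, and a small observation-noise term; the function names (`rbf_kernel`, `gp_posterior`) are hypothetical helpers, not any library's API.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-3):
    """Posterior mean and variance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    # Cholesky solve instead of a direct inverse, for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy data: three noisy observations of an unknown function.
x_train = np.array([-2.0, 0.0, 1.5])
y_train = np.sin(x_train)
mean, var = gp_posterior(x_train, y_train, np.array([0.0, 3.0]))
```

Note how the predictive variance grows for test inputs far from the training data: the second test point (x = 3.0) lies well outside the observed inputs and so receives a much larger variance than the first (x = 0.0), which coincides with a training point.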
Libraries
Use these libraries to find Gaussian Processes models and implementations.
Latest papers
Spatio-Temporal Attention and Gaussian Processes for Personalized Video Gaze Estimation
Additionally, our approach integrates Gaussian processes to include individual-specific traits, facilitating the personalization of our model with just a few labeled samples.
Deep Gaussian Covariance Network with Trajectory Sampling for Data-Efficient Policy Search
We compare trajectory sampling with density-based approximation for uncertainty propagation using three different probabilistic world models: Gaussian processes, Bayesian neural networks, and DGCNs.
Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection
This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits.
A tutorial on learning from preferences and choices with Gaussian Processes
Preference modelling lies at the intersection of economics, decision theory, machine learning and statistics.
Function-space Parameterization of Neural Networks for Sequential Learning
Our parameterization offers: (i) a way to scale function-space methods to large data sets via sparsification, (ii) retention of prior knowledge when access to past data is limited, and (iii) a mechanism to incorporate new data without retraining.
Is Data All That Matters? The Role of Control Frequency for Learning-Based Sampled-Data Control of Uncertain Systems
While a strong focus has been placed on increasing the amount and quality of data to improve performance, data can never fully eliminate uncertainty, making feedback necessary to ensure stability and performance.
Chronos: Learning the Language of Time Series
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models.
Explainable Learning with Gaussian Processes
When using integrated gradients as an attribution method, we show that the attributions of a GPR model also follow a Gaussian process distribution, which quantifies the uncertainty in attribution arising from uncertainty in the model.
Efficiently Computable Safety Bounds for Gaussian Processes in Active Learning
Active learning of physical systems must commonly respect practical safety constraints, which restrict the exploration of the design space.
Global Safe Sequential Learning via Efficient Knowledge Transfer
As transferable source knowledge is often available in safety-critical experiments, we propose transfer safe sequential learning to accelerate the learning of safety.