Bayesian Optimisation
85 papers with code • 0 benchmarks • 0 datasets
Expensive black-box functions are a common problem in many disciplines, including tuning the parameters of machine learning algorithms, robotics, and other engineering design problems. Bayesian Optimisation is a principled and efficient technique for the global optimisation of these functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with a set of “true” observations of the target function, obtained by expensively evaluating it, to produce a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through the use of an acquisition function, which balances the exploitation of regions known to have good performance with the exploration of regions where there is little information about the function’s response.
Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
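The loop described above (fit a surrogate, maximise an acquisition function, evaluate, repeat) can be sketched in a few dozen lines. This is a minimal numpy-only illustration, not any particular library's implementation: it assumes a Gaussian-process surrogate with a squared-exponential kernel, expected improvement as the acquisition function, a 1-D toy objective, and acquisition maximisation over a fixed candidate grid.

```python
import numpy as np
from math import erf

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential covariance between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression: posterior mean and variance at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)  # prior variance of the RBF kernel is 1
    return mean, var

def expected_improvement(mean, var, best_y):
    # EI for minimisation: expected improvement over the best observation so far.
    std = np.sqrt(np.maximum(var, 1e-12))
    z = (best_y - mean) / std
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best_y - mean) * cdf + std * pdf

def target(x):
    # Hypothetical "expensive" black-box objective (cheap here, for illustration).
    return np.sin(3 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 2.0, size=3)   # a few initial "true" observations
y_obs = target(x_obs)
grid = np.linspace(-1.0, 2.0, 200)       # candidate points for the acquisition

for _ in range(10):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mean, var, y_obs.min())
    x_next = grid[np.argmax(ei)]         # acquisition picks the next evaluation
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, target(x_next))

print(x_obs[np.argmin(y_obs)])           # approximate minimiser after 10 steps
```

Early iterations tend to explore (high posterior variance dominates EI), while later iterations exploit the region around the incumbent minimum; in practice the grid search over candidates would be replaced by a continuous optimiser of the acquisition function.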
Benchmarks
These leaderboards are used to track progress in Bayesian Optimisation.
Libraries
Use these libraries to find Bayesian Optimisation models and implementations
Latest papers
A Quadrature Approach for General-Purpose Batch Bayesian Optimization via Probabilistic Lifting
Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, flexibility dealing with discrete and continuous variables simultaneously, model misspecification, and lastly fast massive parallelisation.
On the development of a practical Bayesian optimisation algorithm for expensive experiments and simulations with changing environmental conditions
ENVBO finds solutions across the full domain of the environmental variable that outperform results from optimisation algorithms focused on a single fixed environmental value in all but one case, while using a fraction of their evaluation budget.
Automated Machine Learning for Positive-Unlabelled Learning
Positive-Unlabelled (PU) learning is a growing field of machine learning that aims to learn classifiers from data consisting of labelled positive instances and unlabelled instances, which may in reality be positive or negative but whose true label is unknown.
Cheetah: Bridging the Gap Between Machine Learning and Particle Accelerator Physics with High-Speed, Differentiable Simulations
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics.
Expert-guided Bayesian Optimisation for Human-in-the-loop Experimental Design of Known Systems
Domain experts often possess valuable physical insights that are overlooked in fully automated decision-making processes such as Bayesian optimisation.
Data-driven Prior Learning for Bayesian Optimisation
We replace this assumption with a weaker one only requiring the shape of the optimisation landscape to be similar, and analyse the recent method Prior Learning for Bayesian Optimisation (PLeBO) in this setting.
Stochastic Gradient Descent for Gaussian Processes Done Right
We study the optimisation problem associated with Gaussian process regression using squared loss.
Adaptive Batch Sizes for Active Learning: A Probabilistic Numerics Approach
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation.
Bayesian Optimisation Against Climate Change: Applications and Benchmarks
Bayesian optimisation is a powerful method for optimising black-box functions, popular in settings where the true function is expensive to evaluate and no gradient information is available.
Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning
Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators.