Bayesian Optimization
525 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Bayesian Optimization.
Libraries
Use these libraries to find Bayesian Optimization models and implementations.
Most implemented papers
Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search
Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation.
SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics software.
Differentiable Compositional Kernel Learning for Gaussian Processes
The NKN (Neural Kernel Network) architecture is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel.
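The composition rules referenced here are the standard closure properties of kernels: sums and products of valid kernels are again valid kernels. A minimal sketch (not the NKN implementation itself, just the underlying rule) checks that composed Gram matrices stay positive semi-definite:

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    """Squared-exponential (RBF) kernel matrix for 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def linear(x1, x2):
    """Linear kernel k(a, b) = a * b for 1-D inputs."""
    return x1[:, None] * x2[None, :]

x = np.linspace(-2.0, 2.0, 20)

# Closure rules: the sum and the (elementwise) product of two valid
# kernels are valid kernels, so both Gram matrices below are PSD.
K_sum = rbf(x, x) + linear(x, x)
K_prod = rbf(x, x) * linear(x, x)

for K in (K_sum, K_prod):
    eigvals = np.linalg.eigvalsh(K)
    assert eigvals.min() > -1e-9  # PSD up to numerical round-off
```

Because each NKN unit applies only such closed operations, any network built from them represents a valid kernel by construction.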
BOHB: Robust and Efficient Hyperparameter Optimization at Scale
Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
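For context on what "vanilla Bayesian hyperparameter optimization" means here, the following is a minimal sketch of the standard loop (a GP surrogate plus an expected-improvement acquisition on a toy 1-D objective) — not BOHB itself, and the objective, lengthscale, and grid are illustrative assumptions:

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.2):
    """RBF kernel matrix for 1-D inputs (illustrative lengthscale)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """Posterior mean and std of a zero-mean GP at the query points."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_query)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y_obs
    var = 1.0 - np.sum(Ks * sol, axis=0)  # k(x, x) = 1 for this RBF
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI acquisition for minimization."""
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return sigma * (z * cdf + pdf)

def objective(x):
    # Toy stand-in for an expensive black-box evaluation.
    return np.sin(3.0 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
grid = np.linspace(-2.0, 2.0, 400)
x_obs = rng.uniform(-2.0, 2.0, size=3)
y_obs = objective(x_obs)

# BO loop: fit surrogate, maximize acquisition, evaluate, repeat.
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
```

Each iteration costs a cubic-in-observations GP fit plus one objective evaluation; when a single evaluation is a multi-day training run, this sequential loop is exactly the bottleneck BOHB's bandit-based multi-fidelity scheme is designed to avoid.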
Pre-trained Gaussian processes for Bayesian optimization
Contrary to a common expectation that BO is suited to optimizing black-box functions, it actually requires domain knowledge about those functions to deploy BO successfully.
Output Space Entropy Search Framework for Multi-Objective Bayesian Optimization
We consider the problem of black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments), where the goal is to approximate the true Pareto set of solutions by minimizing the total resource cost of experiments.
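The Pareto set mentioned above is the subset of evaluated points that no other point dominates (at least as good in every objective and strictly better in one). A small sketch of that filtering step, with hypothetical two-objective "experiment" outcomes and minimization of both objectives assumed:

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of non-dominated rows (minimizing every column)."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Point j dominates point i if it is <= in all objectives
        # and strictly < in at least one.
        dominated_by = (np.all(costs <= costs[i], axis=1)
                        & np.any(costs < costs[i], axis=1))
        if dominated_by.any():
            mask[i] = False
    return mask

# Hypothetical outcomes of five expensive evaluations (two objectives).
costs = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0], [2.5, 2.5]])
pareto = costs[pareto_mask(costs)]  # [3.0, 3.0] is dominated by [2.0, 3.0]
```

Output-space entropy search spends its evaluation budget on experiments expected to most reduce uncertainty about this non-dominated front, rather than enumerating it from exhaustive evaluations as above.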
Bayesian Optimization with Safety Constraints: Safe and Automatic Parameter Tuning in Robotics
While an initial guess for the parameters may be obtained from dynamic models of the robot, parameters are usually tuned manually on the real system to achieve the best performance.
Grammar Variational Autoencoder
Crucially, state-of-the-art methods often produce outputs that are not valid.
Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search
While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal.
Deep Bayesian Optimization on Attributed Graphs
Attributed graphs, which contain rich contextual features beyond just network structure, are ubiquitous and have been observed to benefit various network analytics applications.