Hyperparameter Optimization
278 papers with code • 1 benchmark • 3 datasets
Hyperparameter Optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. How well the algorithm fits the data depends directly on these hyperparameters, which govern whether the model overfits or underfits. Each model requires different assumptions, weights, or training speeds for different types of data under a given loss function.
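Before any of the Bayesian methods listed below, random search over a log-scaled space is a common baseline worth knowing. A minimal sketch, where the `validation_loss` function is an invented toy stand-in for the real train-and-validate loop:

```python
import random

def validation_loss(lr, reg):
    # Toy stand-in for training a model with these hyperparameters
    # and measuring validation loss; the optimum (0.1, 0.01) is invented.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_loss, best_params = float("inf"), None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-4, 0)   # sample learning rate on a log scale
        reg = 10 ** rng.uniform(-4, 0)  # sample regularization strength likewise
        loss = validation_loss(lr, reg)
        if loss < best_loss:
            best_loss, best_params = loss, {"lr": lr, "reg": reg}
    return best_loss, best_params

best_loss, best_params = random_search()
```

Sampling on a log scale matters here: hyperparameters like learning rates typically span several orders of magnitude, and uniform sampling would waste most trials on the largest decade.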
Libraries
Use these libraries to find Hyperparameter Optimization models and implementations.

Most implemented papers
Practical Bayesian Optimization of Machine Learning Algorithms
In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP).
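The GP-based loop can be sketched end to end: fit a GP posterior to the evaluations so far, score candidate hyperparameters with expected improvement, and evaluate the most promising one. The RBF kernel, candidate grid, and toy objective below are illustrative choices, not the paper's actual setup:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(A, B, length=0.3):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean/variance at candidates Xs given observations (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)  # prior variance k(x, x) = 1
    return mu, np.maximum(var, 1e-12)

_erf = np.vectorize(erf)

def expected_improvement(mu, var, best):
    # EI for minimization: expected amount by which f beats the incumbent.
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + _erf(z / sqrt(2)))        # standard normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)   # standard normal PDF
    return (best - mu) * Phi + sigma * phi

def objective(x):
    # Invented "validation error" of one hyperparameter in [0, 1].
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)          # a few random initial evaluations
y = objective(X)
grid = np.linspace(0, 1, 201)     # candidate hyperparameter values
for _ in range(15):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmin(y)]
```

The acquisition function is what makes this sample-efficient: it trades off exploiting the posterior mean against exploring regions of high posterior uncertainty, so expensive evaluations are spent where they are most informative.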
Scalable Bayesian Optimization Using Deep Neural Networks
Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations.
BOHB: Robust and Efficient Hyperparameter Optimization at Scale
Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible.
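BOHB combines Bayesian optimization with Hyperband, whose core subroutine is successive halving: evaluate many configurations on a small budget, keep the top fraction, and repeat with a larger budget. A stdlib-only sketch with an invented scoring function standing in for partial training:

```python
import random

def eval_config(config, budget):
    # Invented stand-in for training with this config for `budget` epochs:
    # loss depends on hyperparameter quality and shrinks with more budget.
    return (config["lr"] - 0.1) ** 2 + 1.0 / budget

def successive_halving(n_configs=27, min_budget=1, eta=3, seed=0):
    rng = random.Random(seed)
    configs = [{"lr": 10 ** rng.uniform(-4, 0)} for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        scored = sorted(configs, key=lambda c: eval_config(c, budget))
        configs = scored[: max(1, len(configs) // eta)]  # keep top 1/eta
        budget *= eta                                    # survivors get more budget
    return configs[0]

best = successive_halving()
```

BOHB's contribution is to replace the uniform random sampling of fresh configurations with a model-based (TPE-style) sampler fitted to previous evaluations, keeping Hyperband's budget schedule.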
Tune: A Research Platform for Distributed Model Selection and Training
Tune provides an interface that meets the requirements of a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.
Benchmarking Automatic Machine Learning Frameworks
AutoML serves as the bridge between varying levels of expertise when designing machine learning systems and expedites the data science process.
Random Search and Reproducibility for Neural Architecture Search
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures.
Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science
As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts.
Online Learning Rate Adaptation with Hypergradient Descent
We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice.
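The hypergradient update adapts the learning rate α online using the dot product of consecutive gradients, α ← α + β ∇f(θ_t)·∇f(θ_{t−1}): when successive gradients point the same way, α grows; when they oppose, it shrinks. A minimal sketch on a toy quadratic loss (the objective and constants are illustrative):

```python
import numpy as np

def grad(w):
    # Gradient of the toy loss f(w) = 0.5 * ||w||^2.
    return w

w = np.array([5.0, -3.0])
alpha, beta = 0.01, 0.001   # initial learning rate, hypergradient step size
g_prev = grad(w)
w = w - alpha * g_prev      # one ordinary SGD step to get a previous gradient
for _ in range(100):
    g = grad(w)
    alpha = alpha + beta * float(g @ g_prev)  # hypergradient update of alpha
    w = w - alpha * g                         # SGD step with the adapted rate
    g_prev = g
```

Because the extra state is just the previous gradient, the method adds essentially no memory or compute overhead on top of the base optimizer.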
Automatic Gradient Boosting
Automatic machine learning performs predictive modeling with high-performing machine learning tools without human intervention.
Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions
Empirically, our approach outperforms competing hyperparameter optimization methods on large-scale deep learning problems.