Hyperparameter optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. Whether the algorithm is suitable for the data depends directly on its hyperparameters, which influence overfitting and underfitting. For different types of data under a given loss function, each model requires different assumptions, weights, or training speeds.
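The simplest instance of this problem is an exhaustive grid search. The sketch below is illustrative, not tied to any particular learner: the `validation_loss` function and its loss surface are assumptions standing in for the expensive train-and-evaluate step.

```python
import itertools

# Toy "validation loss" standing in for training a model and evaluating it.
# The hyperparameter names and the loss surface are illustrative assumptions.
def validation_loss(learning_rate, regularization):
    return (learning_rate - 0.1) ** 2 + (regularization - 0.01) ** 2

# Exhaustive grid search: evaluate every combination and keep the best.
grid = {
    "learning_rate": [0.001, 0.01, 0.1, 1.0],
    "regularization": [0.0, 0.01, 0.1],
}

best_config, best_loss = None, float("inf")
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    loss = validation_loss(**config)
    if loss < best_loss:
        best_config, best_loss = config, loss

print(best_config)  # the lowest-loss combination on the grid
```

Grid search is easy to parallelize but scales exponentially with the number of hyperparameters, which motivates the model-based methods discussed later.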
We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation.
As the field of data science continues to grow, there will be an ever-increasing demand for tools that make machine learning accessible to non-experts.
Over the past decade, data science and machine learning have grown from a mysterious art form to a staple tool across a variety of fields in academia, business, and government.
AutoML serves as the bridge between varying levels of expertise when designing machine learning systems and expedites the data science process.
As the demand for machine learning increases, so does the demand for tools that make it easier to use.
The basic features of some of the most versatile and popular open source frameworks for machine learning (TensorFlow, Deeplearning4j, and H2O) are considered and compared.
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization.
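The SMBO loop alternates between fitting a surrogate model to the evaluations seen so far and using an acquisition function to pick the next point to evaluate. The sketch below assumes a 1-D search space and uses a deliberately crude surrogate (the nearest observed point) with a distance-based exploration bonus; a practical implementation would use a Gaussian process or TPE surrogate instead.

```python
import random

def objective(x):
    # Stand-in for an expensive function to minimize (illustrative assumption).
    return (x - 0.3) ** 2

def acquisition(x, history):
    # Predicted loss taken from the nearest evaluated point, minus an
    # exploration bonus that grows with distance from the observed data.
    nearest_x, nearest_y = min(history, key=lambda p: abs(p[0] - x))
    return nearest_y - 0.5 * abs(nearest_x - x)

random.seed(0)
history = [(x, objective(x)) for x in (0.0, 1.0)]  # initial design
for _ in range(20):
    # Propose candidates, then evaluate only the most promising one.
    candidates = [random.random() for _ in range(100)]
    x = min(candidates, key=lambda c: acquisition(c, history))
    history.append((x, objective(x)))

best_x, best_y = min(history, key=lambda p: p[1])
```

The key property of SMBO is visible even in this toy: only 22 expensive evaluations occur, while the cheap surrogate screens 100 candidates per round.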
We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions.
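In Bayesian optimization, the next evaluation point is chosen by maximizing an acquisition function; expected improvement is a common choice. The sketch below assumes the surrogate's posterior at a candidate point is Gaussian with mean `mu` and standard deviation `sigma`; the numeric inputs are illustrative.

```python
import math

# Expected improvement (EI) for maximization: how much we expect a candidate
# to improve on the best observation so far, given a Gaussian posterior.
def expected_improvement(mu, sigma, f_best, xi=0.01):
    if sigma == 0.0:
        return 0.0  # no uncertainty, no expected improvement
    z = (mu - f_best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # N(0,1) pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # N(0,1) cdf
    return (mu - f_best - xi) * cdf + sigma * pdf

# A candidate with a high predicted mean and high uncertainty scores far
# above one the model is confident cannot beat the incumbent.
ei_promising = expected_improvement(mu=1.2, sigma=0.5, f_best=1.0)
ei_unpromising = expected_improvement(mu=0.8, sigma=0.1, f_best=1.0)
```

The `xi` parameter trades off exploration against exploitation: larger values discount the predicted mean, pushing the search toward uncertain regions.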