Hyperparameter Optimization
277 papers with code • 1 benchmark • 3 datasets
Hyperparameter Optimization is the problem of choosing an optimal set of hyperparameters for a learning algorithm. How well the algorithm fits the data depends directly on these hyperparameters, which govern the balance between overfitting and underfitting. Each model requires different assumptions, weights, or training speeds for different types of data under a given loss function.
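The search described above can be sketched with a minimal random search: sample hyperparameter configurations, score each one, and keep the best. The objective below is a synthetic stand-in (the function name `validation_loss` and the optimum location are assumptions for illustration); in practice it would train a model and return its validation error.

```python
import random

def validation_loss(lr, reg):
    # Synthetic bowl-shaped loss surface with its optimum near
    # lr = 0.1, reg = 0.01 (a hypothetical stand-in for real training).
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

def random_search(n_trials=200, seed=0):
    """Sample hyperparameters at random and keep the best configuration."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {
            "lr": 10 ** rng.uniform(-4, 0),   # log-uniform over [1e-4, 1]
            "reg": 10 ** rng.uniform(-4, 0),
        }
        loss = validation_loss(cfg["lr"], cfg["reg"])
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

best_cfg, best_loss = random_search()
print(best_cfg, best_loss)
```

Sampling learning rate and regularization strength on a log scale reflects the common practice of searching such hyperparameters over several orders of magnitude.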
Libraries
Use these libraries to find Hyperparameter Optimization models and implementations.

Latest papers with no code
Simple Hack for Transformers against Heavy Long-Text Classification on a Time- and Memory-Limited GPU Service
Using the best hack found, we then compare token lengths of 512, 256, and 128.
Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates
We study the problem of efficiently computing the derivative of the fixed-point of a parametric non-differentiable contraction map.
Large Language Models to Generate System-Level Test Programs Targeting Non-functional Properties
System-Level Test (SLT) has been part of the test flow for integrated circuits for over a decade and continues to gain importance.
Data augmentation with automated machine learning: approaches and performance comparison with classical data augmentation methods
Finally, we carried out an extensive comparison and analysis of the performance of automated data augmentation techniques and state-of-the-art methods based on classical augmentation approaches.
Adaptive Hyperparameter Optimization for Continual Learning Scenarios
This paper aims to explore the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning hyperparameters according to the complexity of the task at hand.
A machine learning workflow to address credit default prediction
Due to the recent increase in interest in Financial Technology (FinTech), applications like credit default prediction (CDP) are gaining significant industrial and academic attention.
Transformers for Low-Resource Languages: Is Féidir Linn!
The Transformer model is the state-of-the-art in Machine Translation.
Exploratory Landscape Analysis for Mixed-Variable Problems
We provide a comprehensive juxtaposition of the results based on these different techniques.
AutoMMLab: Automatically Generating Deployable Models from Language Instructions for Computer Vision Tasks
Automated machine learning (AutoML) is a collection of techniques designed to automate the machine learning development process.
FlexHB: a More Efficient and Flexible Framework for Hyperparameter Optimization
A comprehensive study of FlexHB shows that (1) our fine-grained fidelity method considerably enhances the efficiency of searching for optimal configurations, and (2) our FlexBand framework (self-adaptive allocation of SH brackets and global ranking of configurations across both current and past SH procedures) grants the algorithm more flexibility and improves anytime performance.
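The SH (Successive Halving) brackets that FlexHB builds on can be sketched in a few lines: evaluate all candidates at a small budget, keep the top fraction, and re-evaluate the survivors at a larger budget. The `evaluate` function below is a hypothetical stand-in for partial training (more budget yields a less noisy estimate of a configuration's true quality); this is a sketch of plain successive halving, not of FlexHB's fine-grained fidelity scheme itself.

```python
import random

def evaluate(cfg, budget):
    # Toy "partial training": the config's true quality plus noise that
    # shrinks as the budget (e.g. number of epochs) grows.
    rng = random.Random(hash((cfg, budget)))
    return cfg + rng.gauss(0, 0.1 / budget)

def successive_halving(configs, min_budget=1, eta=2, rounds=3):
    """One SH bracket: score survivors at increasing budgets and keep
    the top 1/eta fraction after each round."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scores = {c: evaluate(c, budget) for c in survivors}
        survivors.sort(key=lambda c: scores[c], reverse=True)
        survivors = survivors[: max(1, len(survivors) // eta)]
        budget *= eta
    return survivors[0]

random.seed(0)
configs = [round(random.uniform(0, 1), 3) for _ in range(16)]
best = successive_halving(configs)
print(best)
```

Hyperband-style methods run several such brackets with different budget/candidate trade-offs; FlexHB's contribution, per the abstract above, lies in allocating those brackets adaptively and ranking configurations globally across brackets.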