Hyperparameter Optimization

277 papers with code • 1 benchmark • 3 datasets

Hyperparameter Optimization is the problem of choosing a set of optimal hyperparameters for a learning algorithm. How well the algorithm fits the data depends on these hyperparameters, which directly influence whether the model overfits or underfits. Different types of data call for different model assumptions, weights, and training speeds under a given loss function.

Source: Data-driven model for fracturing design optimization: focus on building digital database and production forecast
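As a concrete illustration of the problem, the sketch below runs a simple random search over a two-dimensional hyperparameter space. The objective function, search space, and parameter names (`learning_rate`, `num_leaves`) are hypothetical stand-ins, not taken from any specific paper or library on this page.

```python
import random

# Hypothetical objective: the validation loss a model would achieve when
# trained with the given hyperparameters. A synthetic stand-in is used here,
# with its optimum placed near learning_rate=0.1, num_leaves=31.
def validation_loss(learning_rate, num_leaves):
    return (learning_rate - 0.1) ** 2 + ((num_leaves - 31) / 100) ** 2

# Each hyperparameter gets its own sampling distribution.
search_space = {
    "learning_rate": lambda: 10 ** random.uniform(-3, 0),  # log-uniform
    "num_leaves": lambda: random.randint(2, 256),
}

def random_search(n_trials=100, seed=0):
    """Sample configurations at random and keep the lowest-loss one."""
    random.seed(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {name: sample() for name, sample in search_space.items()}
        loss = validation_loss(**params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

params, loss = random_search()
print(params, loss)
```

Random search is only a baseline; most methods surveyed on this page (Bayesian optimization, bandit-based schedulers, gradient-based tuning) aim to spend the same trial budget more efficiently.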

Libraries

Use these libraries to find Hyperparameter Optimization models and implementations

Latest papers with no code

Simple Hack for Transformers against Heavy Long-Text Classification on a Time- and Memory-Limited GPU Service

no code yet • 19 Mar 2024

Using the best hack found, we then compare 512, 256, and 128 tokens length.

Nonsmooth Implicit Differentiation: Deterministic and Stochastic Convergence Rates

no code yet • 18 Mar 2024

We study the problem of efficiently computing the derivative of the fixed-point of a parametric non-differentiable contraction map.
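The setting here can be illustrated in the scalar case: if x*(θ) is the fixed point of a contraction f(·, θ), the implicit function theorem gives dx*/dθ = (∂f/∂θ) / (1 − ∂f/∂x), evaluated at x*. The sketch below is a minimal numerical illustration of that formula, assuming a toy smooth map f(x, θ) = 0.5x + θ; it is not the nonsmooth method of the paper above.

```python
def f(x, theta):
    # A simple contraction in x (Lipschitz constant 0.5 < 1).
    # Its fixed point is x*(theta) = 2 * theta.
    return 0.5 * x + theta

def fixed_point(theta, x0=0.0, n_iter=100):
    """Find x* with f(x*, theta) = x* by repeated application of f."""
    x = x0
    for _ in range(n_iter):
        x = f(x, theta)
    return x

def implicit_derivative(theta, eps=1e-6):
    """dx*/dtheta via finite-difference partials at the fixed point."""
    x_star = fixed_point(theta)
    df_dx = (f(x_star + eps, theta) - f(x_star - eps, theta)) / (2 * eps)
    df_dtheta = (f(x_star, theta + eps) - f(x_star, theta - eps)) / (2 * eps)
    # Implicit function theorem: dx*/dtheta = df/dtheta / (1 - df/dx).
    return df_dtheta / (1 - df_dx)

# For f above, x*(theta) = 2*theta, so the derivative is exactly 2.
print(implicit_derivative(1.0))
```

This is what makes fixed points useful for hyperparameter optimization: gradients of a converged inner procedure can be obtained without differentiating through every iteration.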

Large Language Models to Generate System-Level Test Programs Targeting Non-functional Properties

no code yet • 15 Mar 2024

System-Level Test (SLT) has been a part of the test flow for integrated circuits for over a decade and still gains importance.

Data augmentation with automated machine learning: approaches and performance comparison with classical data augmentation methods

no code yet • 13 Mar 2024

Finally, we carried out an extensive comparison and analysis of the performance of automated data augmentation techniques and state-of-the-art methods based on classical augmentation approaches.

Adaptive Hyperparameter Optimization for Continual Learning Scenarios

no code yet • 9 Mar 2024

This paper aims to explore the role of hyperparameter selection in continual learning and the necessity of continually and automatically tuning them according to the complexity of the task at hand.

A machine learning workflow to address credit default prediction

no code yet • 6 Mar 2024

Due to the recent increase in interest in Financial Technology (FinTech), applications like credit default prediction (CDP) are gaining significant industrial and academic attention.

Transformers for Low-Resource Languages: Is Féidir Linn!

no code yet • 4 Mar 2024

The Transformer model is the state-of-the-art in Machine Translation.

Exploratory Landscape Analysis for Mixed-Variable Problems

no code yet • 26 Feb 2024

We provide a comprehensive juxtaposition of the results based on these different techniques.

AutoMMLab: Automatically Generating Deployable Models from Language Instructions for Computer Vision Tasks

no code yet • 23 Feb 2024

Automated machine learning (AutoML) is a collection of techniques designed to automate the machine learning development process.

FlexHB: a More Efficient and Flexible Framework for Hyperparameter Optimization

no code yet • 21 Feb 2024

A comprehensive study of FlexHB shows that (1) our fine-grained fidelity method considerably improves the efficiency of searching for optimal configurations, and (2) our FlexBand framework (self-adaptive allocation of SH brackets, and global ranking of configurations across both current and past SH procedures) grants the algorithm more flexibility and improves anytime performance.
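The "SH brackets" above refer to successive halving, the building block behind Hyperband-style methods: evaluate many configurations cheaply, keep the best fraction, and re-evaluate the survivors with a larger budget. The sketch below is a minimal successive-halving loop under assumed toy objective and parameter names; it is not the FlexHB algorithm itself.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Keep the best 1/eta of configs at each rung, growing the budget by eta."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda t: t[0])  # lower loss is better
        survivors = [c for _, c in scored[: max(1, len(scored) // eta)]]
        budget *= eta
    return survivors[0]

# Toy setup: 16 candidate "learning rates" and a hypothetical validation
# loss that improves with budget and has its optimum at 0.3.
random.seed(0)
configs = [random.uniform(0, 1) for _ in range(16)]

def evaluate(lr, budget):
    return (lr - 0.3) ** 2 + 1.0 / budget

best = successive_halving(configs, evaluate, min_budget=1, eta=2)
print(best)
```

With 16 configurations and eta=2, the rungs shrink 16 → 8 → 4 → 2 → 1 while the per-configuration budget doubles at each step, so most of the total budget goes to promising candidates.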