Search Results for author: Hadi S. Jomaa

Found 7 papers, 4 papers with code

Zero-Shot AutoML with Pretrained Models

1 code implementation • 16 Jun 2022 • Ekrem Öztürk, Fabio Ferreira, Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka, Frank Hutter

Given a new dataset D and a low compute budget, how should we choose a pre-trained model to fine-tune to D, and set the fine-tuning hyperparameters without risking overfitting, particularly if D is small?

AutoML • Meta-Learning

Improving Hyperparameter Optimization by Planning Ahead

no code implementations • 15 Oct 2021 • Hadi S. Jomaa, Jonas Falkner, Lars Schmidt-Thieme

Hyperparameter optimization (HPO) is generally treated as a bi-level optimization problem: a (probabilistic) surrogate model is fitted to a set of observed hyperparameter responses, e.g. validation losses, and an acquisition function over that surrogate is then maximized to identify good hyperparameter candidates for evaluation (a minimal sketch of this loop follows below).

Hyperparameter Optimization • Model-based Reinforcement Learning • +4
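
To make the bi-level loop above concrete, here is a minimal SMBO sketch. It illustrates the standard baseline, not the paper's planning-ahead method; it assumes a scikit-learn Gaussian-process surrogate, a fixed grid of candidate configurations, and a simple confidence-bound acquisition.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def smbo(objective, candidates, n_init=5, n_iter=20, kappa=2.0):
    """Minimal SMBO loop: fit a surrogate to observed responses, then
    repeatedly maximize an acquisition function to pick the next trial."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(candidates), size=n_init, replace=False)
    X = candidates[idx]                      # observed hyperparameter settings
    y = np.array([objective(x) for x in X])  # observed responses (e.g. val. loss)
    for _ in range(n_iter):
        surrogate = GaussianProcessRegressor().fit(X, y)
        mu, sigma = surrogate.predict(candidates, return_std=True)
        acq = -(mu - kappa * sigma)          # negated lower confidence bound
        x_next = candidates[int(np.argmax(acq))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[int(np.argmin(y))], float(y.min())
```

Per its title, the paper improves on exactly this loop by planning ahead rather than greedily selecting one configuration at a time.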

HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML

1 code implementation • 11 Jun 2021 • Sebastian Pineda Arango, Hadi S. Jomaa, Martin Wistuba, Josif Grabocka

Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations.

Hyperparameter Optimization • Transfer Learning

Hyperparameter Optimization with Differentiable Metafeatures

no code implementations • 7 Feb 2021 • Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka

In contrast to existing models, DMFBS (i) integrates a differentiable metafeature extractor and (ii) is optimized using a novel multi-task loss that links manifold regularization with a dataset similarity measure learned via an auxiliary dataset-identification meta-task, effectively enforcing that the response approximations for similar datasets are similar (see the sketch below).

Hyperparameter Optimization
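
A loose sketch of the kind of multi-task loss described above, under heavy assumptions: the weighting coefficients, the exponential similarity kernel, and all tensor shapes are illustrative guesses, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def dmfbs_style_loss(response_pred, response_true,
                     metafeatures, dataset_logits, dataset_ids,
                     alpha=1.0, beta=0.1):
    """Illustrative multi-task loss (assumed shapes: response_* are (B, 1),
    metafeatures (B, m), dataset_logits (B, n_datasets), dataset_ids (B,))."""
    # main task: approximate the hyperparameter response (e.g. validation loss)
    regression = F.mse_loss(response_pred, response_true)
    # auxiliary meta-task: identify which dataset each metafeature vector came from
    identification = F.cross_entropy(dataset_logits, dataset_ids)
    # manifold-style penalty: points with similar metafeatures should
    # receive similar response approximations
    meta_dist = torch.cdist(metafeatures, metafeatures)
    resp_dist = torch.cdist(response_pred, response_pred)
    manifold = (torch.exp(-meta_dist) * resp_dist).mean()
    return regression + alpha * identification + beta * manifold
```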

Hyp-RL: Hyperparameter Optimization by Reinforcement Learning

1 code implementation • 27 Jun 2019 • Hadi S. Jomaa, Josif Grabocka, Lars Schmidt-Thieme

More recently, methods have been introduced that build a so-called surrogate model to predict the validation loss for a specific hyperparameter setting, model, and dataset, and then sequentially select the next hyperparameter to test based on a heuristic function of the surrogate's expected value and uncertainty, called the acquisition function (sequential model-based Bayesian optimization, SMBO); one such acquisition function is sketched below.

Bayesian Optimization • Hyperparameter Optimization • +2
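
As a concrete example of a heuristic that combines the surrogate's expected value and uncertainty, the textbook expected-improvement acquisition can be written in a few lines. This is a standard formula for minimization, not Hyp-RL itself; the paper's title indicates it instead learns the selection policy with reinforcement learning.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected improvement (minimization), given the surrogate's predictive
    mean `mu` and standard deviation `sigma` at the candidate points."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    improvement = best_y - mu - xi     # expected gain over the incumbent
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)
```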

In Hindsight: A Smooth Reward for Steady Exploration

no code implementations • 24 Jun 2019 • Hadi S. Jomaa, Josif Grabocka, Lars Schmidt-Thieme

In classical Q-learning, the objective is to maximize the sum of discounted rewards by iteratively applying the Bellman equation as an update, in order to estimate the action-value function of the optimal policy (a tabular sketch follows below).

Atari Games • Q-Learning
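
The update the abstract refers to is the classical tabular Q-learning rule. A minimal sketch, assuming a toy environment whose reset() returns a discrete state and whose step(action) returns (next_state, reward, done):

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning via the Bellman update:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s_next, r, done = env.step(a)  # assumed toy-env interface
            target = r + gamma * (0.0 if done else Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```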

Dataset2Vec: Learning Dataset Meta-Features

1 code implementation • 27 May 2019 • Hadi S. Jomaa, Lars Schmidt-Thieme, Josif Grabocka

As a data-driven approach, meta-learning requires meta-features that represent the primary learning tasks or datasets; traditionally, these are estimated as engineered dataset statistics whose design requires expert domain knowledge tailored to every meta-task (a few such statistics are sketched below).

Auxiliary Learning • Few-Shot Learning • +1
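
For contrast with Dataset2Vec's learned meta-features, a few classical engineered dataset statistics can be computed directly; the particular statistics below are illustrative choices, not the paper's baseline set.

```python
import numpy as np

def engineered_metafeatures(X, y):
    """A few classical, hand-engineered meta-features of a dataset (X, y)."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    skew = ((X - X.mean(0)) ** 3).mean(0) / (X.std(0) ** 3 + 1e-12)
    return {
        "n_instances": n,
        "n_features": d,
        "n_classes": len(counts),
        "class_entropy": float(-(p * np.log2(p)).sum()),  # label imbalance
        "mean_feature_skew": float(skew.mean()),
    }
```

Dataset2Vec replaces such hand-picked statistics with a representation learned directly from the data.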
