Search Results for author: Noor Awad

Found 10 papers, 7 papers with code

Inferring Behavior-Specific Context Improves Zero-Shot Generalization in Reinforcement Learning

1 code implementation • 15 Apr 2024 • Tidiane Camaret Ndir, André Biedenkapp, Noor Awad

In this work, we address the challenge of zero-shot generalization (ZSG) in Reinforcement Learning (RL), where agents must adapt to entirely novel environments without additional training.

Reinforcement Learning (RL) +1

MO-DEHB: Evolutionary-based Hyperband for Multi-Objective Optimization

no code implementations • 8 May 2023 • Noor Awad, Ayushi Sharma, Philipp Müller, Janek Thomas, Frank Hutter

Hyperparameter optimization (HPO) is a powerful technique for automating the tuning of machine learning (ML) models.

Fairness • Hyperparameter Optimization +1
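
As context for the multi-objective setting this paper targets, below is a minimal sketch of Pareto dominance, the basic comparison underlying multi-objective selection strategies (e.g. non-dominated sorting) used by optimizers like MO-DEHB. The function name and the (error, unfairness) objective pairs are illustrative assumptions, not the paper's implementation; all objectives are assumed to be minimized.

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b`: no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Hypothetical (error, unfairness) pairs for two candidate configurations.
print(dominates((0.10, 0.02), (0.12, 0.03)))  # True: better on both objectives
print(dominates((0.10, 0.05), (0.12, 0.03)))  # False: a trade-off, neither dominates
```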

Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML

no code implementations • 15 Mar 2023 • Hilde Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter

The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices.

AutoML • Fairness

Automated Dynamic Algorithm Configuration

1 code implementation • 27 May 2022 • Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, Frank Hutter

The performance of an algorithm often critically depends on its parameter configuration.

DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization

2 code implementations • 20 May 2021 • Noor Awad, Neeratyoy Mallik, Frank Hutter

Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever.

Hyperparameter Optimization • Neural Architecture Search
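
As a rough illustration of DEHB's ingredients, the sketch below pairs textbook differential-evolution operators (rand/1 mutation, binomial crossover, greedy selection) with a Hyperband-style ladder of increasing budgets. The toy objective, population size, and budget schedule are hypothetical stand-ins; the authors' actual implementation is at https://github.com/automl/DEHB.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_objective(x, budget):
    # Stand-in for the validation loss of a model trained for `budget` epochs;
    # noise shrinks as the budget (fidelity) grows. Lower is better.
    noise = rng.normal(scale=1.0 / budget)
    return float(np.sum((x - 0.3) ** 2) + noise)

def de_step(pop, fitness, budget, F=0.5, CR=0.9):
    """One DE generation: rand/1 mutation, binomial crossover, greedy selection."""
    n, d = pop.shape
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(others, size=3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)   # rand/1 mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                 # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])       # binomial crossover
        f_trial = toy_objective(trial, budget)
        if f_trial <= fitness[i]:                     # greedy selection
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness

pop = rng.random((10, 2))                  # 10 configs, 2 hyperparameters in [0, 1]
for budget in (1, 3, 9, 27):               # Hyperband-style increasing fidelities
    fitness = np.array([toy_objective(x, budget) for x in pop])
    pop, fitness = de_step(pop, fitness, budget)
print("best config:", pop[np.argmin(fitness)])
```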

Differential Evolution for Neural Architecture Search

1 code implementation • 11 Dec 2020 • Noor Awad, Neeratyoy Mallik, Frank Hutter

Neural architecture search (NAS) methods rely on a search strategy for deciding which architectures to evaluate next and a performance estimation strategy for assessing their performance (e.g., using full evaluations, multi-fidelity evaluations, or the one-shot model).

Bayesian Optimization • Neural Architecture Search
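
A common way to apply DE, a continuous optimizer, to discrete architecture choices is to keep the population in a continuous space and decode each vector into an architecture before evaluation. The sketch below shows one such decoding; the operation list and binning rule are illustrative assumptions, not the paper's exact encoding.

```python
# Hypothetical set of candidate operations for one edge of a NAS cell.
OPS = ["conv3x3", "conv1x1", "maxpool3x3", "skip", "zero"]

def decode(x):
    """Map each gene in [0, 1) to one of len(OPS) discrete operation choices
    by uniform binning, so DE can search over the continuous vector `x`."""
    return [OPS[min(int(g * len(OPS)), len(OPS) - 1)] for g in x]

print(decode([0.05, 0.55, 0.95]))  # -> ['conv3x3', 'maxpool3x3', 'zero']
```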
