Search Results for author: Carolin Benjamins

Found 9 papers, 8 papers with code

Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

1 code implementation • 7 Jun 2023 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.

Bayesian Optimization • Benchmarking
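
As a hedged illustration of the acquisition function this paper builds on: the sketch below implements static weighted Expected Improvement (WEI) for minimization, where a weight alpha trades off an exploitation term against an exploration term. SAWEI's contribution is adjusting alpha online during the run, which is not shown here; the function name and the fixed alpha are illustrative.

```python
# Sketch of weighted Expected Improvement (WEI), the acquisition family
# SAWEI self-adjusts. The fixed alpha is a placeholder; the paper's
# contribution is adapting alpha online, which is not shown here.
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, alpha=0.5):
    """Weighted EI (minimization): alpha weights the exploitation term
    against the exploration term.

    mu, sigma -- posterior mean/std of the surrogate at candidate points
    f_best    -- best objective value observed so far
    """
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero
    z = (f_best - mu) / sigma
    exploit = (f_best - mu) * norm.cdf(z)     # improvement-weighted term
    explore = sigma * norm.pdf(z)             # uncertainty term
    return alpha * exploit + (1.0 - alpha) * explore
```

With alpha = 0.5 this reduces to standard EI up to a constant factor; pushing alpha toward 0 or 1 biases the search toward exploration or exploitation, respectively.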

AutoRL Hyperparameter Landscapes

1 code implementation • 5 Apr 2023 • Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, Marius Lindauer

Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining further insight into AutoRL problems through landscape analyses.

Hyperparameter Optimization • Open-Ended Question Answering +1
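
To make the notion of a hyperparameter landscape that varies over time concrete, here is a minimal, self-contained sketch: at several training checkpoints, a grid of learning rates is scored, yielding one landscape slice per checkpoint. The `train_and_eval` function is a made-up stand-in for resuming an actual RL run (e.g. PPO); it merely simulates an optimum that drifts as training progresses.

```python
# Illustrative sketch of a hyperparameter landscape over time: score a
# grid of learning rates at several checkpoints. train_and_eval is a
# synthetic stand-in, NOT a real RL training loop.
import numpy as np

learning_rates = np.logspace(-5, -2, num=10)
checkpoints = [10_000, 50_000, 100_000]        # training steps

def train_and_eval(lr: float, step: int) -> float:
    """Toy stand-in for an RL run: a synthetic landscape whose optimal
    learning rate shifts downward as training progresses."""
    best_lr = 1e-3 * 10 ** (-step / 100_000)   # optimum drifts over time
    return -abs(np.log10(lr) - np.log10(best_lr))

landscape = {
    ckpt: [train_and_eval(lr, ckpt) for lr in learning_rates]
    for ckpt in checkpoints
}
# Comparing the per-checkpoint curves shows whether the best learning
# rate stays put or drifts -- the question the paper studies empirically.
```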

Hyperparameters in Contextual RL are Highly Situational

1 code implementation • 21 Dec 2022 • Theresa Eimer, Carolin Benjamins, Marius Lindauer

Although Reinforcement Learning (RL) has shown impressive results in games and simulation, real-world application of RL suffers from its instability under changing environment conditions and hyperparameters.

Hyperparameter Optimization • reinforcement-learning +1

Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis

1 code implementation • 17 Nov 2022 • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr

Bayesian optimization (BO) algorithms form a class of surrogate-based heuristics, aimed at efficiently computing high-quality solutions for numerical black-box optimization problems.

AutoML • Bayesian Optimization
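
A toy sketch of the landscape-aware idea: compute one cheap exploratory-landscape-style feature from the initial design and use it to choose a BO component. The feature and the decision rule below are invented for illustration; the paper trains a proper per-run selector rather than using a hand-set threshold.

```python
# Illustrative only: pick an acquisition function from a single cheap
# landscape feature of the initial design. Both the feature choice and
# the threshold are assumptions made up for this sketch.
import numpy as np
from scipy.stats import skew

def pick_acquisition(X_init, y_init):
    """X_init: (n, d) initial design points; y_init: (n,) objective values."""
    y_skew = skew(y_init)          # simple y-distribution feature
    # Hypothetical rule: heavy left skew hints at a funnel structure,
    # where greedier exploitation (here: EI) tends to pay off.
    return "expected_improvement" if y_skew < 0 else "upper_confidence_bound"

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(50, 2))
y = np.sum(X**2, axis=1)           # toy sphere function
print(pick_acquisition(X, y))
```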

POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning

no code implementations • 23 May 2022 • Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer

The goal of Unsupervised Reinforcement Learning (URL) is to find a reward-agnostic prior policy on a task domain, such that the sample-efficiency on supervised downstream tasks is improved.

Open-Ended Question Answering • reinforcement-learning +2
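
A minimal sketch in the spirit of the paper's title: regularize the current pretraining policy toward an ensemble prior formed from earlier policy snapshots via a KL penalty. The names, the uniform ensemble average, and the fixed beta are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of ensemble regularization in the spirit of POLTER:
# pull the current URL pretraining policy toward the average of earlier
# policy snapshots with a KL penalty. Names and the fixed beta are
# illustrative placeholders.
import torch
import torch.nn.functional as F

def polter_style_loss(url_loss, logits_current, snapshot_logits, beta=0.1):
    """url_loss        -- the unsupervised RL objective (scalar tensor)
    logits_current  -- action logits of the current policy, shape (B, A)
    snapshot_logits -- list of logits from earlier snapshots, each (B, A)
    """
    # Ensemble prior: average the snapshot action distributions.
    prior_probs = torch.stack(
        [F.softmax(l, dim=-1) for l in snapshot_logits]
    ).mean(dim=0)
    log_p = F.log_softmax(logits_current, dim=-1)
    # KL(current || prior), averaged over the batch.
    kl = (log_p.exp() * (log_p - prior_probs.clamp_min(1e-8).log())).sum(-1).mean()
    return url_loss + beta * kl
```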

Contextualize Me -- The Case for Context in Reinforcement Learning

1 code implementation • 9 Feb 2022 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer

While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes.

reinforcement-learning • Reinforcement Learning (RL) +1
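
To illustrate what "context" means here, the sketch below wraps a standard Gymnasium environment so that one physics parameter (gravity) is resampled at every reset; an agent trained under one gravity can then be tested under others. This mirrors the idea behind the authors' CARL benchmark (next entry) but is a hand-rolled illustration, not CARL's actual API.

```python
# Minimal sketch of "context" in RL: a Gymnasium wrapper that resamples
# one physics parameter (gravity) at every reset. Illustrative only;
# this is not the CARL API.
import gymnasium as gym
import numpy as np

class GravityContextWrapper(gym.Wrapper):
    def __init__(self, env, gravity_range=(5.0, 15.0), seed=None):
        super().__init__(env)
        self.gravity_range = gravity_range
        self.rng = np.random.default_rng(seed)

    def reset(self, **kwargs):
        # Sample a new context and write it into the underlying physics.
        g = self.rng.uniform(*self.gravity_range)
        self.env.unwrapped.g = g   # Pendulum-v1 stores gravity as `g`
        obs, info = self.env.reset(**kwargs)
        info["context"] = {"gravity": g}
        return obs, info

env = GravityContextWrapper(gym.make("Pendulum-v1"), seed=0)
obs, info = env.reset()
print(info["context"])
```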

CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning

1 code implementation • 5 Oct 2021 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer

While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment.

Physical Simulations • reinforcement-learning +2
