Search Results for author: Marius Lindauer

Found 65 papers, 35 papers with code

Towards Leveraging AutoML for Sustainable Deep Learning: A Multi-Objective HPO Approach on Deep Shift Neural Networks

no code implementations • 2 Apr 2024 • Leona Hennig, Tanja Tornede, Marius Lindauer

Experimental results demonstrate the effectiveness of our approach, resulting in models with over 80% accuracy and low computational cost.

Hyperparameter Optimization

auto-sktime: Automated Time Series Forecasting

1 code implementation • 13 Dec 2023 • Marc-André Zöller, Marius Lindauer, Marco F. Huber

The framework employs Bayesian optimization to automatically construct pipelines from statistical, machine learning (ML), and deep neural network (DNN) models.

AutoML • Bayesian Optimization +3

Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning

1 code implementation • 7 Sep 2023 • Joseph Giovanelli, Alexander Tornede, Tanja Tornede, Marius Lindauer

In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts than optimizing based on a wrong indicator pre-selected by the user, and performs comparably when an advanced user knows which indicator to pick.

Hyperparameter Optimization
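
The comparison above hinges on Pareto fronts over competing objectives (e.g., predictive error vs. environmental cost). As a minimal illustration of what a Pareto front is, here is a plain-Python non-dominated filter, a sketch with made-up objective values rather than the paper's code:

```python
# Minimal Pareto-front filter for a minimization problem.
# Each candidate is a tuple of objective values, e.g. (error, energy).

def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of points."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [(0.12, 30.0), (0.10, 45.0), (0.15, 25.0), (0.12, 50.0)]
print(pareto_front(candidates))  # (0.12, 50.0) is dominated by (0.12, 30.0)
```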

AutoML in Heavily Constrained Applications

1 code implementation • 29 Jun 2023 • Felix Neutatz, Marius Lindauer, Ziawasch Abedjan

In this paper, we propose CAML, which uses meta-learning to automatically adapt its own AutoML parameters, such as the search strategy, the validation strategy, and the search space, for the task at hand.

AutoML • Meta-Learning

Structure in Reinforcement Learning: A Survey and Open Problems

no code implementations • 28 Jun 2023 • Aditya Mohan, Amy Zhang, Marius Lindauer

We amalgamate these diverse methodologies under a unified framework, shedding light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure.

Reinforcement Learning (RL)

Automated Machine Learning for Remaining Useful Life Predictions

no code implementations • 21 Jun 2023 • Marc-André Zöller, Fabian Mauthe, Peter Zeiler, Marius Lindauer, Marco F. Huber

Recently, data-driven approaches to RUL prediction have become prevalent over model-based approaches, since no underlying physical knowledge of the engineering system is required.

AutoML • Management

Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

1 code implementation • 7 Jun 2023 • Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.

Bayesian Optimization • Benchmarking
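
For readers new to the setting, the surrogate-plus-acquisition loop shared by BO algorithms can be sketched in a few lines. This is a toy example with a hypothetical quadratic objective and expected improvement as the acquisition function, not the paper's implementation:

```python
# Toy Bayesian-optimization loop: Gaussian-process surrogate + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    return (x - 0.3) ** 2  # stand-in for an expensive black-box function

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))           # small initial design
y = np.array([objective(x[0]) for x in X])

for _ in range(10):                          # small evaluation budget
    gp = GaussianProcessRegressor().fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))  # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best x:", X[np.argmin(y)], "best value:", y.min())
```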

Hyperparameters in Reinforcement Learning and How To Tune Them

1 code implementation • 2 Jun 2023 • Theresa Eimer, Marius Lindauer, Roberta Raileanu

In order to improve reproducibility, the deep reinforcement learning (RL) community has been adopting better scientific practices, such as standardized evaluation metrics and reporting.

Hyperparameter Optimization • reinforcement-learning +1

Learning Activation Functions for Sparse Neural Networks

1 code implementation • 18 May 2023 • Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer

By conducting experiments on popular DNN models (LeNet-5, VGG-16, ResNet-18, and EfficientNet-B0) trained on the MNIST, CIFAR-10, and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), results in up to 15.53%, 8.88%, and 6.33% absolute improvements in accuracy for LeNet-5, VGG-16, and ResNet-18 over the default training protocols, especially at high pruning ratios.

Hyperparameter Optimization

AutoRL Hyperparameter Landscapes

1 code implementation • 5 Apr 2023 • Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, Marius Lindauer

Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining more insights into AutoRL problems through landscape analyses.

Hyperparameter Optimization • Open-Ended Question Answering +1

Hyperparameters in Contextual RL are Highly Situational

1 code implementation • 21 Dec 2022 • Theresa Eimer, Carolin Benjamins, Marius Lindauer

Although Reinforcement Learning (RL) has shown impressive results in games and simulation, real-world applications of RL suffer from instability under changing environmental conditions and hyperparameters.

Hyperparameter Optimization • reinforcement-learning +1

Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis

1 code implementation • 17 Nov 2022 • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr

Bayesian optimization (BO) algorithms form a class of surrogate-based heuristics, aimed at efficiently computing high-quality solutions for numerical black-box optimization problems.

AutoML • Bayesian Optimization

DeepCAVE: An Interactive Analysis Tool for Automated Machine Learning

2 code implementations • 7 Jun 2022 • René Sass, Eddie Bergman, André Biedenkapp, Frank Hutter, Marius Lindauer

Automated Machine Learning (AutoML) is used more than ever before to support users in determining efficient hyperparameters, neural architectures, or even full machine learning pipelines.

AutoML • BIG-bench Machine Learning +1

Towards Meta-learned Algorithm Selection using Implicit Fidelity Information

no code implementations • 7 Jun 2022 • Aditya Mohan, Tim Ruhkopf, Marius Lindauer

Most approaches to this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and the topologies that the algorithms attend to.

Automated Dynamic Algorithm Configuration

1 code implementation • 27 May 2022 • Steven Adriaensen, André Biedenkapp, Gresa Shala, Noor Awad, Theresa Eimer, Marius Lindauer, Frank Hutter

The performance of an algorithm often critically depends on its parameter configuration.

POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning

no code implementations • 23 May 2022 • Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer

The goal of Unsupervised Reinforcement Learning (URL) is to find a reward-agnostic prior policy on a task domain, such that the sample-efficiency on supervised downstream tasks is improved.

Open-Ended Question Answering • reinforcement-learning +2

Efficient Automated Deep Learning for Time Series Forecasting

1 code implementation • 11 May 2022 • Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer

In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches.

Bayesian Optimization • Neural Architecture Search +2

$\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

1 code implementation • 23 Apr 2022 • Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank Hutter, Luigi Nardi

To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user.

Bayesian Optimization • Hyperparameter Optimization
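
A rough sketch of the weighting idea described above: a user-supplied prior density rescales a standard acquisition function, and the prior's influence decays as evaluations accumulate. The multiplicative form and the decaying exponent follow the paper's general recipe, but the function names and constants here are illustrative assumptions:

```python
# Prior-weighted acquisition in the spirit of $\pi$BO (sketch, not the authors' code).
import numpy as np

def prior_weighted_acquisition(acq_values, prior_density, n_evals, beta=10.0):
    """acq_values: a standard acquisition (e.g. EI) at candidate points.
    prior_density: the user's belief pi(x) about the optimum, at the same points.
    The exponent beta / n_evals shrinks the prior's influence over time."""
    return acq_values * prior_density ** (beta / max(n_evals, 1))

ei = np.array([0.2, 0.5, 0.4])
prior = np.array([0.9, 0.1, 0.5])  # user believes the optimum is near point 0
print(prior_weighted_acquisition(ei, prior, n_evals=2))    # prior dominates early
print(prior_weighted_acquisition(ei, prior, n_evals=200))  # data dominates late
```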

Practitioner Motives to Select Hyperparameter Optimization Methods

no code implementations • 3 Mar 2022 • Niklas Hasebrook, Felix Morsbach, Niclas Kannengießer, Marc Zöller, Jörg Franke, Marius Lindauer, Frank Hutter, Ali Sunyaev

Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models.

Bayesian Optimization • BIG-bench Machine Learning +1

Contextualize Me -- The Case for Context in Reinforcement Learning

1 code implementation • 9 Feb 2022 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, Sebastian Döhler, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer

While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes.

Reinforcement Learning (RL) +1

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

no code implementations • 11 Jan 2022 • Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer

The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents.

AutoML • Meta-Learning +2

Searching in the Forest for Local Bayesian Optimization

no code implementations • 10 Nov 2021 • Difan Deng, Marius Lindauer

Because of its sample efficiency, Bayesian optimization (BO) has become a popular approach for dealing with expensive black-box optimization problems, such as hyperparameter optimization (HPO).

Bayesian Optimization • Hyperparameter Optimization

CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning

1 code implementation • 5 Oct 2021 • Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer

While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment.

Physical Simulations • reinforcement-learning +2

$\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization

no code implementations • ICLR 2022 • Carl Hvarfner, Danny Stoll, Artur Souza, Luigi Nardi, Marius Lindauer, Frank Hutter

To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user.

Bayesian Optimization • Hyperparameter Optimization

Developing Open Source Educational Resources for Machine Learning and Data Science

no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl

It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).

BIG-bench Machine Learning

Well-tuned Simple Nets Excel on Tabular Datasets

1 code implementation • NeurIPS 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures.

Automatic Risk Adaptation in Distributional Reinforcement Learning

no code implementations • 11 Jun 2021 • Frederik Schubert, Theresa Eimer, Bodo Rosenhahn, Marius Lindauer

The use of Reinforcement Learning (RL) agents in practical applications requires the consideration of suboptimal outcomes, depending on the familiarity of the agent with its environment.

Distributional Reinforcement Learning • reinforcement-learning +1

TempoRL: Learning When to Act

1 code implementation • 9 Jun 2021 • André Biedenkapp, Raghu Rajan, Frank Hutter, Marius Lindauer

Reinforcement learning is a powerful approach for learning behaviour through interactions with an environment.

Q-Learning

Self-Paced Context Evaluation for Contextual Reinforcement Learning

1 code implementation • 9 Jun 2021 • Theresa Eimer, André Biedenkapp, Frank Hutter, Marius Lindauer

Reinforcement learning (RL) has made many advances towards solving a single problem in a given environment, but learning policies that generalize to unseen variations of a problem remains challenging.

Reinforcement Learning (RL)

DACBench: A Benchmark Library for Dynamic Algorithm Configuration

1 code implementation • 18 May 2021 • Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, Marius Lindauer

Dynamic Algorithm Configuration (DAC) aims to dynamically control a target algorithm's hyperparameters in order to improve its performance.

Benchmarking
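
To make the setting concrete, here is a toy control loop in the spirit of DAC: a policy adjusts a target algorithm's hyperparameter (here, a gradient step size) at every step, based on the algorithm's current state. This is a hypothetical illustration, not DACBench's actual API:

```python
# Toy dynamic configuration: the step size of a gradient descent on f(x) = x^2
# is chosen anew at every iteration from the current state.

def target_algorithm_step(x, step_size):
    grad = 2 * x  # gradient of f(x) = x^2
    return x - step_size * grad

def policy(state):
    # Large steps far from the optimum, small careful steps near it.
    return 0.4 if abs(state) > 1.0 else 0.05

x = 5.0
for _ in range(20):
    x = target_algorithm_step(x, policy(x))
print("final x:", x)  # approaches the optimum at 0
```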

Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization

1 code implementation • ICML Workshop AutoML 2021 • Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, Andre Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, Frank Hutter

Neural architecture search (NAS) and hyperparameter optimization (HPO) make deep learning accessible to non-experts by automatically finding the architecture of the deep neural network to use and tuning the hyperparameters of the used training pipeline.

Hyperparameter Optimization • Neural Architecture Search

Regularization Cocktails

no code implementations • 1 Jan 2021 • Arlind Kadra, Marius Lindauer, Frank Hutter, Josif Grabocka

The regularization of prediction models is arguably the most crucial ingredient that allows Machine Learning solutions to generalize well on unseen data.

Hyperparameter Optimization

Neural Model-based Optimization with Right-Censored Observations

no code implementations • 29 Sep 2020 • Katharina Eggensperger, Kai Haase, Philipp Müller, Marius Lindauer, Frank Hutter

When fitting a regression model to predict the distribution of the outcomes, we cannot simply drop these right-censored observations, but need to properly model them.

regression • Thompson Sampling
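
To make the modeling point concrete: under a Gaussian noise assumption, uncensored runs contribute their density to the likelihood, while right-censored runs contribute the survival probability P(Y > cutoff). A minimal sketch of such a censored log-likelihood (a Tobit-style construction assumed here, not necessarily the paper's exact model):

```python
# Gaussian log-likelihood with right censoring (sketch).
import numpy as np
from scipy.stats import norm

def censored_loglik(mu, sigma, y, censored):
    """y: observed values; for censored entries, y holds the cutoff.
    censored: boolean mask, True where the run hit the cutoff."""
    ll_observed = norm.logpdf(y[~censored], loc=mu, scale=sigma)
    ll_censored = norm.logsf(y[censored], loc=mu, scale=sigma)  # log P(Y > y)
    return ll_observed.sum() + ll_censored.sum()

y = np.array([3.2, 5.0, 5.0, 2.1])  # runtimes; 5.0 is the cutoff
censored = np.array([False, True, True, False])
print(censored_loglik(mu=3.0, sigma=1.5, y=y, censored=censored))
```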

Prior-guided Bayesian Optimization

no code implementations • 28 Sep 2020 • Artur Souza, Luigi Nardi, Leonardo Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter

While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts.

Bayesian Optimization

Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning

4 code implementations • 8 Jul 2020 • Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, Frank Hutter

Automated Machine Learning (AutoML) supports practitioners and researchers with the tedious task of designing machine learning pipelines and has recently achieved substantial success.

AutoML • BIG-bench Machine Learning +1

Bayesian Optimization with a Prior for the Optimum

no code implementations • 25 Jun 2020 • Artur Souza, Luigi Nardi, Leonardo B. Oliveira, Kunle Olukotun, Marius Lindauer, Frank Hutter

We show that BOPrO is around 6.67x faster than state-of-the-art methods on a common suite of benchmarks, and achieves new state-of-the-art performance on a real-world hardware design application.

Bayesian Optimization

Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL

2 code implementations • 24 Jun 2020 • Lucas Zimmer, Marius Lindauer, Frank Hutter

While early AutoML frameworks focused on optimizing traditional ML pipelines and their hyperparameters, a recent trend in AutoML is to focus on neural architecture search.

Neural Architecture Search

Learning Heuristic Selection with Dynamic Algorithm Configuration

1 code implementation • 15 Jun 2020 • David Speck, André Biedenkapp, Frank Hutter, Robert Mattmüller, Marius Lindauer

We show that dynamic algorithm configuration can be used for dynamic heuristic selection which takes into account the internal search dynamics of a planning system.

Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework

1 code implementation • 1 Jun 2020 • André Biedenkapp, H. Furkan Bozkurt, Theresa Eimer, Frank Hutter, Marius Lindauer

The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on parameter tuning.

General Reinforcement Learning

Best Practices for Scientific Research on Neural Architecture Search

no code implementations • 5 Sep 2019 • Marius Lindauer, Frank Hutter

Finding a well-performing architecture is often tedious for both DL practitioners and researchers, leading to tremendous interest in the automation of this task by means of neural architecture search (NAS).

BIG-bench Machine Learning • Neural Architecture Search

BOAH: A Tool Suite for Multi-Fidelity Bayesian Optimization & Analysis of Hyperparameters

1 code implementation • 16 Aug 2019 • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Joshua Marben, Philipp Müller, Frank Hutter

Hyperparameter optimization and neural architecture search can become prohibitively expensive for regular black-box Bayesian optimization because the training and evaluation of a single model can easily take several hours.

Bayesian Optimization • Hyperparameter Optimization +1

Towards White-box Benchmarks for Algorithm Control

no code implementations • 18 Jun 2019 • André Biedenkapp, H. Furkan Bozkurt, Frank Hutter, Marius Lindauer

The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations.

Reinforcement Learning (RL)

Towards Automatically-Tuned Deep Neural Networks

2 code implementations • 18 May 2019 • Hector Mendoza, Aaron Klein, Matthias Feurer, Jost Tobias Springenberg, Matthias Urban, Michael Burkart, Maximilian Dippel, Marius Lindauer, Frank Hutter

Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks.

AutoML • BIG-bench Machine Learning

The Algorithm Selection Competitions 2015 and 2017

no code implementations • 3 May 2018 • Marius Lindauer, Jan N. van Rijn, Lars Kotthoff

The algorithm selection problem is to choose the most suitable algorithm for solving a given problem instance.
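
A minimal sketch of per-instance algorithm selection, the problem these competitions address: learn a mapping from instance features to the algorithm expected to perform best. The data and the choice of classifier are made up for illustration, not taken from the competition setup:

```python
# Per-instance algorithm selection via a classifier over instance features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 5))       # hypothetical instance features
runtimes = rng.exponential(size=(200, 3))  # hypothetical runtimes of 3 algorithms
best_algo = runtimes.argmin(axis=1)        # oracle label: fastest algorithm

selector = RandomForestClassifier(random_state=0).fit(features, best_algo)
new_instance = rng.normal(size=(1, 5))
print("selected algorithm:", selector.predict(new_instance)[0])
```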

Neural Networks for Predicting Algorithm Runtime Distributions

no code implementations • 22 Sep 2017 • Katharina Eggensperger, Marius Lindauer, Frank Hutter

Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance.

Warmstarting of Model-based Algorithm Configuration

no code implementations • 14 Sep 2017 • Marius Lindauer, Frank Hutter

The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem.

Pitfalls and Best Practices in Algorithm Configuration

2 code implementations • 17 May 2017 • Katharina Eggensperger, Marius Lindauer, Frank Hutter

Good parameter settings are crucial to achieve high performance in many areas of artificial intelligence (AI), such as propositional satisfiability solving, AI planning, scheduling, and machine learning (in particular deep learning).

Experimental Design • Scheduling

Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

no code implementations • 30 Mar 2017 • Katharina Eggensperger, Marius Lindauer, Holger H. Hoos, Frank Hutter, Kevin Leyton-Brown

In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures.

Benchmarking • Hyperparameter Optimization

ASlib: A Benchmark Library for Algorithm Selection

2 code implementations • 8 Jun 2015 • Bernd Bischl, Pascal Kerschke, Lars Kotthoff, Marius Lindauer, Yuri Malitsky, Alexandre Frechette, Holger Hoos, Frank Hutter, Kevin Leyton-Brown, Kevin Tierney, Joaquin Vanschoren

To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature.

The Configurable SAT Solver Challenge (CSSC)

no code implementations • 5 May 2015 • Frank Hutter, Marius Lindauer, Adrian Balint, Sam Bayless, Holger Hoos, Kevin Leyton-Brown

It is well known that different solution strategies work well for different types of instances of hard combinatorial problems.

claspfolio 2: Advances in Algorithm Selection for Answer Set Programming

no code implementations • 7 May 2014 • Holger Hoos, Marius Lindauer, Torsten Schaub

The claspfolio 2 solver framework supports various feature generators, solver selection approaches, solver portfolios, as well as solver-schedule-based pre-solving techniques.

Solver Scheduling via Answer Set Programming

no code implementations • 6 Jan 2014 • Holger Hoos, Roland Kaminski, Marius Lindauer, Torsten Schaub

Although Boolean Constraint Technology has made tremendous progress over the last decade, the efficacy of state-of-the-art solvers is known to vary considerably across different types of problem instances and is known to depend strongly on algorithm parameters.

Benchmarking • Scheduling
