no code implementations • 21 Jul 2023 • Jonathan Gadea Harder, Timo Kötzing, Xiaoyue Li, Aishwarya Radhakrishnan
Furthermore, we show that RLS with step size adaptation achieves an optimization time of $\Theta(n \cdot \log(|a|_1))$.
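The general idea of step-size adaptation in randomized local search can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the concrete fitness (negated $L_1$ distance to a hidden integer target) and the double-on-success/halve-on-failure rule are standard illustrative choices, and all names are hypothetical.

```python
import random

def rls_step_adaptive(target, max_iters=100_000, seed=0):
    """Sketch: randomized local search on Z^n with an adaptive step
    size (double on success, halve on failure). Fitness is the
    negated L1 distance to a hidden target -- an illustrative choice."""
    rng = random.Random(seed)
    n = len(target)
    x = [0] * n
    step = 1
    fit = -sum(abs(xi - ti) for xi, ti in zip(x, target))
    for _ in range(max_iters):
        if fit == 0:                    # optimum reached
            break
        i = rng.randrange(n)            # pick one coordinate
        d = step if rng.random() < 0.5 else -step
        y = x[:]
        y[i] += d
        new_fit = -sum(abs(yi - ti) for yi, ti in zip(y, target))
        if new_fit >= fit:
            x, fit = y, new_fit
            step *= 2                   # success: try larger steps
        else:
            step = max(1, step // 2)    # failure: shrink the step
    return x, fit

best, f = rls_step_adaptive([5, -17, 256])
```

The adaptive step size lets the search cross large distances in logarithmically many doublings, which is the intuition behind the $\log$ factor in bounds of this flavor.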
no code implementations • 29 May 2023 • Tobias Friedrich, Timo Kötzing, Aneta Neumann, Frank Neumann, Aishwarya Radhakrishnan
Understanding how evolutionary algorithms perform on constrained problems has gained increasing attention in recent years.
1 code implementation • 13 Feb 2023 • Markus Wagner, Erik Kohlros, Gerome Quantmeyer, Timo Kötzing
We provide an open-source framework for experimenting with evolutionary algorithms, which we call "Experimenting and Learning toolkit for Evolutionary Algorithms" (ELEA).

no code implementations • 24 Nov 2022 • Tobias Friedrich, Timo Kötzing, Frank Neumann, Aishwarya Radhakrishnan
Estimation of distribution algorithms (EDAs) provide a distribution-based approach to optimization that adapts its probability distribution during the run of the algorithm.
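The distribution-based principle can be sketched with a simple univariate EDA (a UMDA-style scheme on OneMax; the benchmark, parameters, and names here are illustrative assumptions, not taken from the paper):

```python
import random

def umda_onemax(n=20, pop=50, mu=20, gens=200, seed=1):
    """Sketch of a simple univariate EDA: maintain a product
    distribution p over bits, sample a population from it, and
    refit p to the mu best samples each generation."""
    rng = random.Random(seed)
    p = [0.5] * n                      # initial marginal probabilities
    lo, hi = 1.0 / n, 1.0 - 1.0 / n    # margins keep p away from 0 and 1
    for _ in range(gens):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop)]
        samples.sort(key=sum, reverse=True)   # OneMax fitness = sum of bits
        best = samples[:mu]
        p = [min(hi, max(lo, sum(s[i] for s in best) / mu)) for i in range(n)]
        if sum(samples[0]) == n:
            return samples[0]
    return samples[0]

x = umda_onemax()
```

The clamping to $[1/n, 1-1/n]$ is the usual guard against marginals fixing prematurely at 0 or 1.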
no code implementations • 7 Apr 2021 • Benjamin Doerr, Timo Kötzing
One of the first and easiest-to-use techniques for proving run time bounds for evolutionary algorithms is the so-called method of fitness levels by Wegener.
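As a standard illustration of the method (included here as a textbook example, not a claim from the abstract): partition the search space into levels $A_0 < A_1 < \dots < A_n$ by fitness, with $A_i = \{x : \mathrm{OneMax}(x) = i\}$. If the algorithm, when in level $A_i$, reaches a strictly higher level with probability at least $s_i$ in each step, then

$$E[T] \le \sum_{i=0}^{n-1} \frac{1}{s_i}.$$

For the (1+1) EA with mutation rate $1/n$ on OneMax, flipping any single one of the $n-i$ zero bits suffices to leave level $A_i$, so $s_i \ge (n-i) \cdot \frac{1}{n}\left(1-\frac{1}{n}\right)^{n-1} \ge \frac{n-i}{en}$, which yields $E[T] \le \sum_{i=0}^{n-1} \frac{en}{n-i} = e n H_n = O(n \log n)$.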
no code implementations • 15 Oct 2020 • Vanja Doskoč, Timo Kötzing
In particular, we show that explanatory monotone learners, although known to be strictly stronger, do (almost) preserve the pairwise relation as seen in strongly monotone learning.
no code implementations • 15 Oct 2020 • Vanja Doskoč, Timo Kötzing
Such results are key to understanding the as yet undiscovered mutual relations between various important learning paradigms when learning behaviourally correctly.
no code implementations • 15 Oct 2020 • Julian Berger, Maximilian Böther, Vanja Doskoč, Jonathan Gadea Harder, Nicolas Klodt, Timo Kötzing, Winfried Lötzsch, Jannik Peters, Leon Schiller, Lars Seifert, Armin Wells, Simon Wietheger
We study learning of indexed families from positive data where a learner can freely choose a hypothesis space (with uniformly decidable membership) comprising at least the languages to be learned.
no code implementations • 15 Oct 2020 • Julian Berger, Maximilian Böther, Vanja Doskoč, Jonathan Gadea Harder, Nicolas Klodt, Timo Kötzing, Winfried Lötzsch, Jannik Peters, Leon Schiller, Lars Seifert, Armin Wells, Simon Wietheger
This so-called $W$-index allows for naming arbitrary computably enumerable languages, with the drawback that even the membership problem is undecidable.
no code implementations • 9 Oct 2020 • Timo Kötzing, Karen Seidel
We investigate learning collections of languages from texts by an inductive inference machine with access to the current datum and a bounded memory in the form of states.
no code implementations • 7 Oct 2020 • Ardalan Khazraei, Timo Kötzing, Karen Seidel
In order to model an efficient learning paradigm, iterative learning algorithms access data one by one, updating the current hypothesis without recourse to past data.
no code implementations • 12 Jun 2020 • Timo Kötzing, Carsten Witt
Fixed-budget theory is concerned with computing or bounding the fitness value achievable by randomized search heuristics within a given budget of fitness function evaluations.
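The fixed-budget perspective can be made concrete with a small simulation (an illustrative sketch, not the paper's analysis: RLS on OneMax and all parameter choices are assumptions):

```python
import random

def rls_fixed_budget(n=100, budget=500, seed=3):
    """Sketch of the fixed-budget view: instead of measuring the
    time to reach the optimum, report the best OneMax fitness that
    RLS attains within a fixed number of fitness evaluations."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    fit = sum(x)                  # one evaluation for the start point
    for _ in range(budget - 1):
        i = rng.randrange(n)
        delta = 1 - 2 * x[i]      # flipping bit i changes fitness by +-1
        if delta >= 0:            # accept only improvements
            x[i] ^= 1
            fit += delta
    return fit

f = rls_fixed_budget()
```

Here each one-bit flip counts as one evaluation (exploiting that its fitness effect on OneMax is known to be $\pm 1$); the returned value is the quantity fixed-budget theory aims to bound.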
no code implementations • 11 Apr 2019 • Benjamin Doerr, Timo Kötzing
Drift analysis aims at translating the expected progress of an evolutionary algorithm (or more generally, a random process) into a probabilistic guarantee on its run time (hitting time).
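A standard instance of such a translation (stated here as a textbook illustration, not as content of the abstract) is the additive drift theorem: if $(X_t)_{t \ge 0}$ is a nonnegative process with $E[X_t - X_{t+1} \mid X_t > 0] \ge \delta$ for some $\delta > 0$, then the hitting time $T = \min\{t : X_t = 0\}$ satisfies

$$E[T \mid X_0] \le \frac{X_0}{\delta}.$$

For example, an algorithm that decreases its expected distance to the optimum by at least $\delta = 1/2$ in every non-optimal step, starting at distance $X_0 = n$, reaches the optimum in expected time at most $2n$.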
no code implementations • 6 Jun 2018 • Benjamin Doerr, Timo Kötzing, J. A. Gregor Lagodzinski, Johannes Lengler
While many optimization problems work with a fixed number of decision variables and thus a fixed-length representation of possible solutions, genetic programming (GP) works on variable-length representations.
no code implementations • 4 Jun 2018 • Clemens Frahnow, Timo Kötzing
We show that, while the (1+1) EA gets stuck in a bad local optimum and incurs a run time of $\Theta(n^{2r})$ fitness evaluations on FORK, island models with a complete topology can achieve a run time of $\Theta(n^{1.5r})$ by making use of rare migrations in order to explore the search space more effectively.
2 code implementations • 25 May 2018 • Timo Kötzing, J. A. Gregor Lagodzinski, Johannes Lengler, Anna Melnichenko
We show that the Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants independent of employing bloat control.
no code implementations • 22 May 2018 • Timo Kötzing, Martin S. Krejca
As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
no code implementations • 31 Jan 2018 • Martin Aschenbach, Timo Kötzing, Karen Seidel
We investigate learning from positive and negative information, so-called \emph{informants}, one of the models for human and machine learning introduced by E.~M.~Gold.
no code implementations • 13 Sep 2016 • Tobias Friedrich, Timo Kötzing, Markus Wagner
A common strategy for improving optimization algorithms is to restart the algorithm when it is believed to be trapped in an inferior part of the search space.
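A generic restart scheme can be sketched as a wrapper around any randomized heuristic. This is an illustrative sketch only: the doubling-budget rule is one common choice (Luby-style schedules are another), and the `run_once` interface and the toy random-search heuristic are assumptions, not the paper's setup.

```python
import random

def with_restarts(run_once, initial_budget=64, max_restarts=20, seed=7):
    """Sketch of a restart wrapper: rerun a randomized heuristic
    with a doubling budget until it reports success.
    run_once(budget, rng) is an assumed interface returning
    (success, solution)."""
    rng = random.Random(seed)
    budget = initial_budget
    for _ in range(max_restarts):
        ok, sol = run_once(budget, rng)
        if ok:
            return sol
        budget *= 2               # give the next attempt more time
    return None

# Toy heuristic for demonstration: random search for an all-ones string.
def random_search(budget, rng, n=8):
    for _ in range(budget):
        x = [rng.randrange(2) for _ in range(n)]
        if sum(x) == n:
            return True, x
    return False, None

sol = with_restarts(random_search)
```

Restarting from a fresh random state is exactly the escape mechanism used when a run is believed trapped in an inferior region.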
no code implementations • 10 Aug 2016 • Duc-Cuong Dang, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Per Kristian Lehre, Pietro S. Oliveto, Dirk Sudholt, Andrew M. Sutton
This proves a sizeable advantage of all variants of the ($\mu$+1) GA over the (1+1) EA, which requires time $\Theta(n^k)$.
no code implementations • 12 Apr 2016 • Benjamin Doerr, Carola Doerr, Timo Kötzing
The most common representation in evolutionary computation is the bit string.
no code implementations • 19 Jun 2015 • Benjamin Doerr, Carola Doerr, Timo Kötzing
For their setting, in which the solution length is sampled from a geometric distribution, we provide mutation rates that yield an expected optimization time that is of the same order as that of the (1+1) EA knowing the solution length.
no code implementations • 10 Feb 2015 • Tobias Friedrich, Timo Kötzing, Martin Krejca, Andrew M. Sutton
For this, we model sexual recombination with a simple estimation of distribution algorithm called the Compact Genetic Algorithm (cGA), which we compare with the classical $(\mu+1)$ EA.
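The cGA itself is short enough to sketch. This is a generic textbook rendition on OneMax, with illustrative parameter choices, not the exact setup of the paper:

```python
import random

def cga_onemax(n=20, K=60, max_iters=50_000, seed=5):
    """Sketch of the compact Genetic Algorithm (cGA) on OneMax:
    sample two solutions per step and shift each bit frequency by
    1/K toward the winner. K plays the role of a population size."""
    rng = random.Random(seed)
    p = [0.5] * n                      # one frequency per bit
    lo, hi = 1.0 / n, 1.0 - 1.0 / n    # standard frequency margins
    for _ in range(max_iters):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if sum(y) > sum(x):
            x, y = y, x                # x is now the winner
        for i in range(n):
            if x[i] != y[i]:           # update only where samples differ
                shift = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(hi, max(lo, p[i] + shift))
        if sum(x) == n:
            return x
    return x

x = cga_onemax()
```

Maintaining a single frequency vector instead of a population is what makes the cGA a convenient minimal model of recombining (sexual) populations.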
no code implementations • 29 Apr 2014 • Timo Kötzing, Raphaela Palenta
We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another.
no code implementations • 30 Mar 2014 • Benjamin Doerr, Carola Doerr, Timo Kötzing
We analyze the unbiased black-box complexity of jump functions with small, medium, and large sizes of the fitness plateau surrounding the optimal solution.
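For concreteness, one common formulation of the jump benchmark (conventions for the gap vary between papers; this sketch follows the variant where the fitness inside the gap points away from the optimum):

```python
def jump(x, k):
    """Sketch of the Jump_k benchmark: OneMax-like fitness, except
    for a gap of width k just below the optimum, which forces a
    simultaneous flip of k bits to reach the all-ones string."""
    n, ones = len(x), sum(x)
    if ones == n or ones <= n - k:
        return k + ones           # smooth OneMax-like region, plus the optimum
    return n - ones               # the gap: fitness decreases toward the optimum

f_opt = jump([1] * 10, 3)         # the optimum
f_edge = jump([1] * 7 + [0] * 3, 3)   # last point before the gap
f_gap = jump([1] * 8 + [0] * 2, 3)    # inside the gap
```

The parameter $k$ is the plateau (gap) size whose small, medium, and large regimes drive the black-box complexity analysis.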