no code implementations • 2 May 2024 • Junaid Akhter, Paul David Fährmann, Konstantin Sonntag, Sebastian Peitz
We highlight the most common pitfalls to avoid when using multiobjective optimization (MOO) techniques in ML.
no code implementations • 29 Apr 2024 • Hans Harder, Sebastian Peitz
We utilize extreme learning machines for the prediction of solutions of partial differential equations (PDEs).
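For readers unfamiliar with extreme learning machines: the hidden layer is drawn at random and frozen, and only the linear readout is fit by (ridge-regularized) least squares. The sketch below is a generic illustration of that idea on toy data, not the implementation from the paper; all names and hyperparameters are assumptions.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=200, ridge=1e-6, seed=0):
    """Extreme learning machine: random frozen hidden layer, least-squares readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # fixed random features
    # Only the output weights are learned, via ridge-regularized least squares
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress y = sin(x) from 100 samples
X = np.linspace(0, 2 * np.pi, 100)[:, None]
Y = np.sin(X)
W, b, beta = elm_fit(X, Y)
err = np.max(np.abs(elm_predict(X, W, b, beta) - Y))
```

Because only the readout is trained, fitting reduces to a single linear solve, which is what makes ELMs attractive as cheap surrogate predictors.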
no code implementations • 21 Mar 2024 • Hans Harder, Sebastian Peitz
The value function plays a crucial role as a measure of the cumulative future reward an agent receives, in both reinforcement learning and optimal control.
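As a minimal illustration of the value function V(s) = E[Σ_t γ^t r_t | s_0 = s], the sketch below runs value iteration to its fixed point on a toy two-state MDP; the transition probabilities and rewards are purely illustrative and not taken from the paper.

```python
import numpy as np

gamma = 0.9
# Illustrative MDP: P[s, a, s'] transition probabilities, R[s, a] rewards
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(500):                  # Bellman optimality iteration
    Q = R + gamma * P @ V             # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```

State 1 can collect reward 2 forever under action 1, so its value converges to 2/(1 - γ) = 20; the iteration is a γ-contraction, which guarantees convergence from any initialization.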
1 code implementation • 23 Aug 2023 • Augustina C. Amakor, Konstantin Sonntag, Sebastian Peitz
To overcome this limitation, we present an algorithm that allows for the approximation of the entire Pareto front for the above-mentioned objectives in a very efficient manner for high-dimensional DNNs with millions of parameters.
no code implementations • 28 Jul 2023 • Sebastian Peitz, Hans Harder, Feliks Nüske, Friedrich Philipp, Manuel Schaller, Karl Worthmann
The Koopman operator has become an essential tool for the data-driven analysis, prediction, and control of complex systems, chiefly because of its potential to identify linear function-space representations of nonlinear dynamics from measurements.
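The simplest instance of such a data-driven linear representation is dynamic mode decomposition, which fits a linear map between snapshot pairs by least squares. The sketch below is a generic illustration on synthetic linear data, not code from any of the papers listed here.

```python
import numpy as np

# Dynamic mode decomposition: columns of X are snapshots x_k,
# columns of Y are their successors x_{k+1}; the best-fit linear
# map A minimizes ||Y - A X||_F.
rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])           # illustrative stable linear system
X = rng.normal(size=(2, 200))
Y = A_true @ X                            # noise-free snapshot pairs

A_dmd = Y @ np.linalg.pinv(X)             # least-squares estimate of the linear map
eigvals = np.linalg.eigvals(A_dmd)        # dynamic eigenvalues (here 0.9 and 0.8)
```

On noise-free linear data the regression recovers the generating map exactly; for nonlinear systems, lifting the state through a dictionary of observables (as in EDMD) plays the role of the function-space representation.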
1 code implementation • 14 Feb 2023 • Stefan Werner, Sebastian Peitz
The goal of this paper is to make a strong case for the use of dynamical models when applying reinforcement learning (RL) to the feedback control of dynamical systems governed by partial differential equations (PDEs).
1 code implementation • 25 Jan 2023 • Sebastian Peitz, Jan Stenner, Vikas Chidananda, Oliver Wallscheid, Steven L. Brunton, Kunihiko Taira
We present a convolutional framework which significantly reduces the complexity, and thus the computational effort, of distributed reinforcement learning control of dynamical systems governed by partial differential equations (PDEs).
1 code implementation • 20 Sep 2022 • Samuel E. Otto, Sebastian Peitz, Clarence W. Rowley
Data-driven models for nonlinear dynamical systems based on approximating the underlying Koopman operator or generator have proven to be successful tools for forecasting, feature learning, state estimation, and control.
1 code implementation • 8 Apr 2021 • Michael Dellnitz, Eyke Hüllermeier, Marvin Lücke, Sina Ober-Blöbaum, Christian Offen, Sebastian Peitz, Karlson Pfannschmidt
While the classical schemes apply very generally and are highly efficient on regular systems, they can behave sub-optimally when an inefficient step rejection mechanism is triggered by structurally complex systems such as chaotic systems.
1 code implementation • 9 Feb 2021 • Sebastian Peitz, Katharina Bieker
In other words, surrogate modeling for autonomous systems is much easier than for control systems.
no code implementations • 14 Dec 2020 • Katharina Bieker, Bennet Gebken, Sebastian Peitz
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks).
no code implementations • 23 Sep 2019 • Stefan Klus, Feliks Nüske, Sebastian Peitz, Jan-Hendrik Niemann, Cecilia Clementi, Christof Schütte
We derive a data-driven method for the approximation of the Koopman generator called gEDMD, which can be regarded as a straightforward extension of EDMD (extended dynamic mode decomposition).
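A minimal sketch of the two regressions, assuming a monomial dictionary ψ(x) = [1, x, x²] on scalar data and a deterministic ODE x' = f(x); the dictionary and test system are illustrative choices, not those used in the paper. EDMD regresses the dictionary evaluated at successor states onto the dictionary at current states; gEDMD replaces the successor evaluations with the generator applied to the dictionary, Lψ = f(x) dψ/dx.

```python
import numpy as np

def dictionary(x):
    """Illustrative monomial dictionary psi(x) = [1, x, x^2] for scalar data."""
    return np.stack([np.ones_like(x), x, x**2], axis=1)

def edmd(x, y):
    """EDMD: Koopman matrix K from snapshot pairs (x_k, y_k = flow of x_k)."""
    Px, Py = dictionary(x), dictionary(y)
    return np.linalg.pinv(Px) @ Py                # K minimizes ||Px K - Py||_F

def gedmd(x, f):
    """gEDMD: same regression, but the right-hand side is the generator
    L psi = f(x) * dpsi/dx applied to each dictionary function."""
    Px = dictionary(x)
    dP = np.stack([np.zeros_like(x), np.ones_like(x), 2 * x], axis=1)  # dpsi/dx
    return np.linalg.pinv(Px) @ (f(x)[:, None] * dP)

# Toy usage for x' = -x: L1 = 0, Lx = -x, Lx^2 = -2x^2,
# so the generator matrix is diag(0, -1, -2) in this dictionary.
x = np.linspace(-1.0, 1.0, 50)
L = gedmd(x, lambda z: -z)
```

The only change from EDMD to gEDMD is the right-hand side of the least-squares problem, which is why the latter is a straightforward extension of the former.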
no code implementations • 24 May 2019 • Katharina Bieker, Sebastian Peitz, Steven L. Brunton, J. Nathan Kutz, Michael Dellnitz
The control of complex systems is of critical importance in many branches of science, engineering, and industry.
no code implementations • 16 May 2018 • Stefan Klus, Sebastian Peitz, Ingmar Schuster
Kernel transfer operators, which can be regarded as approximations in reproducing kernel Hilbert spaces of transfer operators such as the Perron-Frobenius or Koopman operator, are defined in terms of covariance and cross-covariance operators. They have been shown to be closely related to the conditional mean embedding framework developed by the machine learning community.
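In the finite-sample setting, the empirical conditional mean embedding reduces to a regularized Gram-matrix solve: given pairs (xᵢ, yᵢ), one estimates E[g(Y) | X = x] ≈ g(y)ᵀ (G + nλI)⁻¹ k(x_train, x). The sketch below illustrates this on toy data; the Gaussian kernel, bandwidth, and regularization constant are assumptions, not values from the paper.

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    """Gaussian kernel matrix k(a_i, b_j) for 1-D samples."""
    d = A[:, None] - B[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

# Toy data: a noisy deterministic map y = x^2
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
y = x**2 + 0.01 * rng.normal(size=300)

# Empirical conditional mean embedding for the observable g(y) = y:
#   E[Y | X = x] ≈ y^T (G + n*lam*I)^{-1} k(x_train, x)
G = gauss_kernel(x, x)
n, lam = len(x), 1e-6
W = np.linalg.solve(G + n * lam * np.eye(n), y)

x_test = np.array([1.0, -1.5])
pred = gauss_kernel(x, x_test).T @ W       # ≈ x_test^2
```

The Gram matrix G plays the role of the empirical covariance operator and the evaluation vector k(x_train, x) that of the feature of the conditioning point, mirroring the operator-theoretic definition in finite dimensions.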