Search Results for author: Thomas A. Runkler

Found 10 papers, 2 papers with code

Model-based Offline Quantum Reinforcement Learning

no code implementations · 14 Apr 2024 · Simon Eisenmann, Daniel Hein, Steffen Udluft, Thomas A. Runkler

The policy is optimized with a gradient-free optimization scheme using the return estimate given by the model as the fitness function.

reinforcement-learning
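The optimization loop described in the abstract is easy to picture in code. Below is a minimal sketch, assuming a learned dynamics model with the hypothetical interface model(s, a) -> (next_state, reward) and a simple hill-climbing random search standing in for the paper's actual gradient-free scheme; the linear-tanh policy is likewise illustrative, not the authors' parameterization.

import numpy as np

def model_return(theta, model, s0, horizon=50):
    # Roll out the learned model and accumulate rewards; model(s, a) ->
    # (next_state, reward) is an assumed interface, not the paper's API.
    s, total = np.array(s0, dtype=float), 0.0
    for _ in range(horizon):
        a = np.tanh(theta @ s)  # illustrative linear-tanh policy
        s, r = model(s, a)
        total += r
    return total

def random_search(model, s0, dim_s, dim_a, iters=200, sigma=0.1):
    # Gradient-free hill climbing: keep a perturbed parameter matrix
    # whenever the model-estimated return (the fitness) improves.
    theta = np.zeros((dim_a, dim_s))
    best = model_return(theta, model, s0)
    for _ in range(iters):
        cand = theta + sigma * np.random.randn(dim_a, dim_s)
        fit = model_return(cand, model, s0)
        if fit > best:
            theta, best = cand, fit
    return theta

Any gradient-free optimizer (evolution strategies, particle swarm, CMA-ES) could be dropped in for random_search; the essential point is that the fitness is the model's return estimate, so no environment interaction is needed during optimization.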

TinyMetaFed: Efficient Federated Meta-Learning for TinyML

no code implementations · 13 Jul 2023 · Haoyu Ren, Xue Li, Darko Anicic, Thomas A. Runkler

The field of Tiny Machine Learning (TinyML) has made substantial advancements in democratizing machine learning on low-footprint devices, such as microcontrollers.

Computational Efficiency, Few-Shot Learning

TinyReptile: TinyML with Federated Meta-Learning

no code implementations · 11 Apr 2023 · Haoyu Ren, Darko Anicic, Thomas A. Runkler

Tiny machine learning (TinyML) is a rapidly growing field aiming to democratize machine learning (ML) for resource-constrained microcontrollers (MCUs).

Federated Learning, Meta-Learning
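TinyReptile's name points to the Reptile meta-update. A minimal serial sketch, assuming clients hold small least-squares tasks as a hypothetical stand-in for on-device workloads:

import numpy as np

def sgd_steps(w, X, y, lr=0.01, steps=5):
    # A few local gradient steps on a least-squares task; a hypothetical
    # stand-in for whatever model a TinyML device trains on-device.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def tiny_reptile(clients, dim, rounds=100, eps=0.1):
    # Serial Reptile across clients: each round one client adapts a copy
    # of the global weights locally, then the server nudges the global
    # weights toward the adapted ones (the Reptile meta-update).
    w = np.zeros(dim)
    for t in range(rounds):
        X, y = clients[t % len(clients)]
        w_local = sgd_steps(w.copy(), X, y)
        w = w + eps * (w_local - w)  # move toward the adapted weights
    return w

Visiting one client per round and shipping only a weight vector back and forth keeps both communication and per-device memory small, which is the appeal of Reptile-style updates in a TinyML setting.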

SeLoC-ML: Semantic Low-Code Engineering for Machine Learning Applications in Industrial IoT

1 code implementation · 18 Jul 2022 · Haoyu Ren, Kirill Dorofeev, Darko Anicic, Youssef Hammad, Roland Eckl, Thomas A. Runkler

This paper presents a framework called Semantic Low-Code Engineering for ML Applications (SeLoC-ML), built on a low-code platform to support the rapid development of ML applications in IIoT by leveraging Semantic Web technologies.

BIG-bench Machine Learning

Interpretable Control by Reinforcement Learning

no code implementations · 20 Jul 2020 · Daniel Hein, Steffen Limmer, Thomas A. Runkler

In this paper, three recently introduced reinforcement learning (RL) methods are used to generate human-interpretable policies for the cart-pole balancing benchmark.

Reinforcement Learning (RL)
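For intuition, "human-interpretable" in this line of work typically means a compact closed-form expression over the four cart-pole state variables (pole angle \theta, angular velocity \dot{\theta}, cart position x, cart velocity \dot{x}). A generic illustrative form, not one of the policies reported in the paper:

a_t = \operatorname{sign}\left(w_1\,\theta_t + w_2\,\dot{\theta}_t + w_3\,x_t + w_4\,\dot{x}_t\right)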

Modeling System Dynamics with Physics-Informed Neural Networks Based on Lagrangian Mechanics

no code implementations · 29 May 2020 · Manuel A. Roehrl, Thomas A. Runkler, Veronika Brandtstetter, Michel Tokic, Stefan Obermayer

In this paper, we present physics-informed neural ordinary differential equations (PINODE), a hybrid model that combines the two modeling techniques to overcome the aforementioned problems.
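The Lagrangian-mechanics prior behind such hybrid models can be summarized by the Euler-Lagrange equations; the textbook form below uses standard notation (generalized coordinates q, Lagrangian L, generalized forces \tau) and is not copied from the paper:

\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = \tau
\qquad\Longrightarrow\qquad
\ddot{q} = \left(\frac{\partial^2 L}{\partial \dot{q}^2}\right)^{-1}\left(\tau + \frac{\partial L}{\partial q} - \frac{\partial^2 L}{\partial \dot{q}\,\partial q}\,\dot{q}\right)

Solving for \ddot{q} turns the conservation law into forward dynamics that a neural ODE integrator can consume, which is how the physics knowledge constrains the learned model.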

Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming

no code implementations · 29 Apr 2018 · Daniel Hein, Steffen Udluft, Thomas A. Runkler

Autonomously training interpretable control strategies, called policies, using pre-existing plant trajectory data is of great interest in industrial applications.

Reinforcement Learning (RL)

Interpretable Policies for Reinforcement Learning by Genetic Programming

no code implementations · 12 Dec 2017 · Daniel Hein, Steffen Udluft, Thomas A. Runkler

Here we introduce the genetic programming for reinforcement learning (GPRL) approach based on model-based batch reinforcement learning and genetic programming, which autonomously learns policy equations from pre-existing default state-action trajectory samples.

Regression, Reinforcement Learning (RL) +2
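A minimal sketch of the GPRL idea: evolve expression trees whose fitness is the return under a learned model. The two-variable state, tiny operator set, and model interface model(s, a) -> (next_state, reward) are illustrative assumptions, not the paper's setup.

import numpy as np, random

OPS = {'+': np.add, '-': np.subtract, '*': np.multiply}

def random_tree(depth=2):
    # Random expression tree over state variables s0, s1 and constants.
    if depth <= 0 or random.random() < 0.3:
        return random.choice(['s0', 's1', round(random.uniform(-1, 1), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, s):
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, s), evaluate(right, s))
    if tree == 's0':
        return s[0]
    if tree == 's1':
        return s[1]
    return tree  # numeric constant

def mutate(tree, depth=2):
    # Replace a random subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def fitness(tree, model, s0, horizon=30):
    # Model-based return of the policy a = clip(tree(s), -1, 1).
    s, total = np.array(s0, dtype=float), 0.0
    for _ in range(horizon):
        a = float(np.clip(evaluate(tree, s), -1.0, 1.0))
        s, r = model(s, a)
        total += r
    return total

def gprl(model, s0, pop=30, gens=40):
    # (mu + lambda)-style loop: mutate everyone, keep the fitter half.
    population = [random_tree() for _ in range(pop)]
    for _ in range(gens):
        children = [mutate(t) for t in population]
        population = sorted(population + children,
                            key=lambda t: fitness(t, model, s0),
                            reverse=True)[:pop]
    return population[0]  # most fit, human-readable policy equation

Because the result is an explicit equation over state variables, the learned policy can be read, checked, and simplified by a domain expert, which is the point of the approach.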

A Benchmark Environment Motivated by Industrial Control Problems

2 code implementations · 27 Sep 2017 · Daniel Hein, Stefan Depeweg, Michel Tokic, Steffen Udluft, Alexander Hentschel, Thomas A. Runkler, Volkmar Sterzing

On the one hand, these benchmarks are designed to provide interpretable RL training scenarios and detailed insight into the learning process of the method at hand.

OpenAI Gym, Reinforcement Learning (RL)

Batch Reinforcement Learning on the Industrial Benchmark: First Experiences

no code implementations · 20 May 2017 · Daniel Hein, Steffen Udluft, Michel Tokic, Alexander Hentschel, Thomas A. Runkler, Volkmar Sterzing

The Particle Swarm Optimization Policy (PSO-P) was recently introduced and has been shown to produce remarkable results when interacting with academic reinforcement learning benchmarks in an off-policy, batch-based setting.

Reinforcement Learning (RL)
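PSO-P casts action selection as swarm optimization over action sequences, scored by model-predicted return. A minimal sketch, assuming a one-dimensional action bounded in [-1, 1] and the hypothetical model interface model(s, a) -> (next_state, reward):

import numpy as np

def pso_plan(model, s0, horizon=10, particles=20, iters=30,
             w=0.7, c1=1.4, c2=1.4):
    # Each particle is a whole action sequence; its fitness is the
    # return obtained by rolling the sequence through the learned model.
    def ret(seq):
        s, total = np.array(s0, dtype=float), 0.0
        for a in seq:
            s, r = model(s, float(np.clip(a, -1.0, 1.0)))
            total += r
        return total

    x = np.random.uniform(-1, 1, (particles, horizon))  # positions
    v = np.zeros_like(x)                                # velocities
    pbest = x.copy()                                    # per-particle bests
    pbest_f = np.array([ret(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()              # swarm best

    for _ in range(iters):
        r1 = np.random.rand(particles, horizon)
        r2 = np.random.rand(particles, horizon)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -1.0, 1.0)
        f = np.array([ret(p) for p in x])
        better = f > pbest_f
        pbest[better] = x[better]
        pbest_f[better] = f[better]
        gbest = pbest[pbest_f.argmax()].copy()

    return gbest[0]  # execute the first action, then re-plan

Executing only the first action and re-planning from the next state gives receding-horizon behavior in the spirit of model-predictive control, which is what lets the method run purely off-policy against a model learned from batch data.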
