no code implementations • 7 Mar 2024 • Fabian Otto, Philipp Becker, Vien Ang Ngo, Gerhard Neumann
Existing off-policy reinforcement learning algorithms typically require an explicit state-action-value function representation, which becomes problematic in high-dimensional action spaces.
1 code implementation • 21 Jan 2024 • Ge Li, Hongyi Zhou, Dominik Roth, Serge Thilges, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann
Current advancements in reinforcement learning (RL) have predominantly focused on learning step-based policies that generate actions for each perceived state.
no code implementations • 22 Jun 2023 • Fabian Otto, Hongyi Zhou, Onur Celik, Ge Li, Rudolf Lioutikov, Gerhard Neumann
We introduce a novel deep reinforcement learning (RL) approach called Movement Primitive-based Planning Policy (MP3).
no code implementations • 10 Feb 2023 • Philipp Becker, Sebastian Markgraf, Fabian Otto, Gerhard Neumann
Combining inputs from multiple sensor modalities effectively in reinforcement learning (RL) is an open problem.
1 code implementation • 18 Oct 2022 • Fabian Otto, Onur Celik, Hongyi Zhou, Hanna Ziesche, Ngo Anh Vien, Gerhard Neumann
In this paper, we present a new algorithm for deep episodic reinforcement learning (ERL).
no code implementations • 4 Oct 2022 • Ge Li, Zeqi Jin, Michael Volpp, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann
Movement primitives (MPs) can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic Movement Primitives (DMPs), and (b) probabilistic approaches that capture higher-order statistics of the motion, e.g., Probabilistic Movement Primitives (ProMPs).
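To make the dynamics-based category concrete, here is a minimal sketch of a 1-D discrete DMP rollout. The standard formulation (a critically damped spring-damper toward the goal, modulated by a phase-dependent forcing term) is from the DMP literature, not from this paper; the function name, gain values, and basis-width heuristic are illustrative assumptions.

```python
import numpy as np

def dmp_rollout(y0, g, weights, n_steps=1000, dt=0.001, tau=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
    """Roll out a 1-D discrete Dynamic Movement Primitive (sketch).

    The transformation system is a damped spring pulled toward the goal g,
    perturbed by a learned forcing term. The canonical phase variable x
    decays to 0, so the forcing vanishes over time and the trajectory
    converges to g from any initial state y0 -- the "smooth trajectories
    from any initial state" property mentioned above.
    """
    n_basis = len(weights)
    # Basis-function centers spread over the phase variable's range;
    # the width heuristic below is a common choice, not canonical.
    centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    widths = n_basis / centers

    y, z, x = y0, 0.0, 1.0   # position, scaled velocity, phase
    traj = [y]
    for _ in range(n_steps):
        psi = np.exp(-widths * (x - centers) ** 2)
        # Forcing term: weighted basis activations, scaled by phase and
        # goal displacement so it vanishes as x -> 0.
        forcing = x * (g - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + forcing)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)
        traj.append(y)
    return np.array(traj)
```

With zero weights the forcing term disappears and the rollout reduces to the spring-damper, so `dmp_rollout(0.0, 1.0, np.zeros(10))` converges smoothly to the goal; learning an MP amounts to fitting `weights` to reproduce a demonstrated shape.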
1 code implementation • ICLR 2021 • Fabian Otto, Philipp Becker, Ngo Anh Vien, Hanna Carolin Ziesche, Gerhard Neumann
However, enforcing such trust regions in deep reinforcement learning is difficult.