Search Results for author: Fabian Otto

Found 7 papers, 3 papers with code

Vlearn: Off-Policy Learning with Efficient State-Value Function Estimation

no code implementations • 7 Mar 2024 • Fabian Otto, Philipp Becker, Vien Anh Ngo, Gerhard Neumann

Existing off-policy reinforcement learning algorithms typically necessitate an explicit state-action-value function representation, which becomes problematic in high-dimensional action spaces.

Tasks: Efficient Exploration
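
The abstract's point about explicit state-action-value critics can be made concrete: a Q-function takes the action as part of its input, so the critic must generalize over the joint state-action space, while a state-value critic's input is independent of the action dimension. Below is a minimal PyTorch sketch of the two parameterizations; the network sizes and names are illustrative assumptions, not the paper's architecture.

```python
# A minimal sketch (not the paper's implementation) contrasting the two
# critic parameterizations the Vlearn abstract refers to. Dimensions and
# network sizes are illustrative assumptions.
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 32  # a high-dimensional action space

# Explicit state-action-value critic: the action is concatenated to the
# input, so the input grows with the action dimension.
q_critic = nn.Sequential(
    nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1)
)

# State-value critic: the input dimension does not depend on the action
# space; off-policy corrections handle the action dependence in the
# learning target instead.
v_critic = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, 1)
)

s = torch.randn(8, obs_dim)
a = torch.randn(8, act_dim)
print(q_critic(torch.cat([s, a], dim=-1)).shape)  # torch.Size([8, 1])
print(v_critic(s).shape)                          # torch.Size([8, 1])
```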

Open the Black Box: Step-based Policy Updates for Temporally-Correlated Episodic Reinforcement Learning

1 code implementation • 21 Jan 2024 • Ge Li, Hongyi Zhou, Dominik Roth, Serge Thilges, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann

Current advancements in reinforcement learning (RL) have predominantly focused on learning step-based policies that generate actions for each perceived state.

Tasks: Reinforcement Learning (RL)
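
The step-based policies the abstract mentions emit one action per perceived state, whereas episodic policies (the temporally-correlated alternative this paper connects them to) emit a set of trajectory parameters once per episode. The sketch below illustrates that distinction only; the linear maps, basis functions, and dimensions are assumptions, not the paper's method.

```python
# A minimal sketch (assumptions, not the paper's code) of the two policy
# types: step-based policies map each state to an action, episodic
# policies map an initial state to basis weights for a whole trajectory.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, horizon, n_basis = 8, 2, 50, 5

def step_based_policy(state, W):
    """One action per perceived state (a linear map, for illustration)."""
    return W @ state

def episodic_policy(initial_state, W):
    """One set of basis weights per episode; a full trajectory follows."""
    weights = (W @ initial_state).reshape(act_dim, n_basis)
    t = np.linspace(0.0, 1.0, horizon)
    centers = np.linspace(0.0, 1.0, n_basis)
    basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.1) ** 2)
    basis /= basis.sum(axis=1, keepdims=True)  # normalized RBF basis
    return basis @ weights.T                   # (horizon, act_dim) trajectory

state = rng.standard_normal(obs_dim)
a_t = step_based_policy(state, rng.standard_normal((act_dim, obs_dim)))
traj = episodic_policy(state, rng.standard_normal((act_dim * n_basis, obs_dim)))
print(a_t.shape, traj.shape)  # (2,) (50, 2)
```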

MP3: Movement Primitive-Based (Re-)Planning Policy

no code implementations • 22 Jun 2023 • Fabian Otto, Hongyi Zhou, Onur Celik, Ge Li, Rudolf Lioutikov, Gerhard Neumann

We introduce a novel deep reinforcement learning (RL) approach called Movement Primitive-based Planning Policy (MP3).

Tasks: Reinforcement Learning (RL)

ProDMPs: A Unified Perspective on Dynamic and Probabilistic Movement Primitives

no code implementations • 4 Oct 2022 • Ge Li, Zeqi Jin, Michael Volpp, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann

MPs can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic Movement Primitives (DMPs), and (b) probabilistic approaches that capture higher-order statistics of the motion, e.g., Probabilistic Movement Primitives (ProMPs).

Tasks: Numerical Integration
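
The two MP families the abstract categorizes can be illustrated in a few lines: a dynamics-based primitive integrates a stable second-order system, so the trajectory is smooth from any initial state, while a probabilistic primitive defines a distribution over basis weights whose samples yield a distribution over trajectories. The sketch below is illustrative only; the gains, basis functions, and weight distribution are assumptions, not the paper's unified formulation.

```python
# A minimal sketch (illustrative, not the paper's formulation) of the two
# movement-primitive families: (a) DMP-style dynamics, (b) ProMP-style
# weight distributions. All constants here are assumed values.
import numpy as np

T, dt = 200, 0.01
t = np.linspace(0.0, 1.0, T)

# (a) Dynamics-based: a 1-D spring-damper system integrated from an
# arbitrary initial state, so the trajectory is smooth by construction
# (the DMP forcing term is omitted for brevity).
alpha, beta, goal = 25.0, 25.0 / 4.0, 1.0
y, z = -0.5, 0.0  # arbitrary initial position and velocity
dmp_traj = np.empty(T)
for i in range(T):
    z += alpha * (beta * (goal - y) - z) * dt
    y += z * dt
    dmp_traj[i] = y

# (b) Probabilistic: a Gaussian over basis weights; sampling weights
# yields trajectories whose mean and covariance capture the motion's
# higher-order statistics.
n_basis = 10
centers = np.linspace(0.0, 1.0, n_basis)
Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 0.05) ** 2)
Phi /= Phi.sum(axis=1, keepdims=True)
w_mean, w_cov = np.zeros(n_basis), 0.1 * np.eye(n_basis)
rng = np.random.default_rng(0)
promp_samples = Phi @ rng.multivariate_normal(w_mean, w_cov, size=5).T

print(dmp_traj.shape, promp_samples.shape)  # (200,) (200, 5)
```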
