Search Results for author: Manfred Eppe

Found 20 papers, 4 papers with code

Scilab-RL: A software framework for efficient reinforcement learning and cognitive modeling research

no code implementations · 25 Jan 2024 · Jan Dohmen, Frank Röder, Manfred Eppe

One problem with researching cognitive modeling and reinforcement learning (RL) is that researchers spend too much time setting up an appropriate computational framework for their experiments.

Data Visualization · Hyperparameter Optimization +3

Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections

1 code implementation · 18 Nov 2022 · Frank Röder, Manfred Eppe

To evaluate our approach, we propose a collection of benchmark environments for action correction in language-conditioned reinforcement learning, utilizing a synthetic instructor to generate language goals and their corresponding corrections.

Instruction Following · reinforcement-learning +1
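The snippet above describes a synthetic instructor that issues language goals and corrects the agent when it acts on the wrong target. A minimal sketch of that idea follows; the class name, goal templates, and correction phrasing are illustrative assumptions, not the paper's actual benchmark API.

```python
class SyntheticInstructor:
    """Toy instructor: emits a language goal, then corrects wrong actions."""

    def __init__(self, objects):
        self.objects = objects
        self.target = None

    def sample_goal(self, index=0):
        # Deterministic pick for illustration; a real instructor would sample.
        self.target = self.objects[index % len(self.objects)]
        return f"pick up the {self.target}"

    def correct(self, acted_on):
        # Return None if the agent acted on the right object,
        # otherwise return a corrective instruction.
        if acted_on == self.target:
            return None
        return f"no, not the {acted_on}, the {self.target}"
```

For example, after `sample_goal(0)` with objects `["red block", "blue block"]`, acting on the blue block yields the correction "no, not the blue block, the red block", which the agent can condition on in the next step.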

Intelligent problem-solving as integrated hierarchical reinforcement learning

no code implementations · 18 Aug 2022 · Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter

According to cognitive psychology and related disciplines, the development of complex problem-solving behaviour in biological agents depends on hierarchical cognitive mechanisms.

Hierarchical Reinforcement Learning · reinforcement-learning +1

Intelligent behavior depends on the ecological niche: Scaling up AI to human-like intelligence in socio-cultural environments

no code implementations · 11 Mar 2021 · Manfred Eppe, Pierre-Yves Oudeyer

This paper outlines a perspective on the future of AI, discussing directions for machine models of human-like intelligence.

Hierarchical principles of embodied reinforcement learning: A review

no code implementations · 18 Dec 2020 · Manfred Eppe, Christian Gumbsch, Matthias Kerzel, Phuong D. H. Nguyen, Martin V. Butz, Stefan Wermter

We then relate these insights with contemporary hierarchical reinforcement learning methods, and identify the key machine intelligence approaches that realise these mechanisms.

Hierarchical Reinforcement Learning · reinforcement-learning +1

Sensorimotor representation learning for an "active self" in robots: A model survey

no code implementations · 25 Nov 2020 · Phuong D. H. Nguyen, Yasmin Kim Georgie, Ezgi Kayhan, Manfred Eppe, Verena Vanessa Hafner, Stefan Wermter

Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation.

Representation Learning

Enhancing a Neurocognitive Shared Visuomotor Model for Object Identification, Localization, and Grasping With Learning From Auxiliary Tasks

1 code implementation · 26 Sep 2020 · Matthias Kerzel, Fares Abawi, Manfred Eppe, Stefan Wermter

In this follow-up study, we expand the task and the model to reaching for objects in a three-dimensional space with a novel dataset based on augmented reality and a simulation environment.

Curious Hierarchical Actor-Critic Reinforcement Learning

1 code implementation · 7 May 2020 · Frank Röder, Manfred Eppe, Phuong D. H. Nguyen, Stefan Wermter

Hierarchical abstraction and curiosity-driven exploration are two common paradigms in current reinforcement learning approaches to break down difficult problems into a sequence of simpler ones and to overcome reward sparsity.

Benchmarking · Hierarchical Reinforcement Learning +2
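The abstract above combines hierarchical abstraction with curiosity-driven exploration to overcome reward sparsity. A common way to realize the curiosity part is to add a forward-model prediction error as an intrinsic bonus to the sparse extrinsic reward; the sketch below shows that shaping scheme, with function names and the weighting factor `eta` as illustrative assumptions rather than the paper's implementation.

```python
def curiosity_bonus(predicted_next_state, actual_next_state):
    # Squared prediction error of a learned forward model: large when the
    # transition surprised the model, i.e. the state is worth exploring.
    return sum((p - a) ** 2 for p, a in zip(predicted_next_state, actual_next_state))

def shaped_reward(extrinsic, predicted_next_state, actual_next_state, eta=0.5):
    # Mix the sparse extrinsic reward with the curiosity bonus;
    # eta trades off exploration against goal achievement.
    return extrinsic + eta * curiosity_bonus(predicted_next_state, actual_next_state)
```

In a hierarchical setup, a bonus like this can be applied at each level, so both the high-level goal-setting policy and the low-level control policy receive a learning signal even before any extrinsic reward is observed.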

From semantics to execution: Integrating action planning with reinforcement learning for robotic causal problem-solving

no code implementations · 23 May 2019 · Manfred Eppe, Phuong D. H. Nguyen, Stefan Wermter

In this article, we build on these novel methods to facilitate the integration of action planning with reinforcement learning by exploiting reward sparsity as a bridge between the high-level and low-level state and control spaces.

reinforcement-learning · Reinforcement Learning (RL)
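One simple way to read "reward sparsity as a bridge" between a symbolic planner and a low-level policy is that each planned subgoal defines a sparse reward: 1 when the low-level state reaches the subgoal, 0 otherwise. The sketch below illustrates that interface; the function name and tolerance are assumptions for illustration, not the paper's method.

```python
def sparse_subgoal_reward(state, subgoal, tol=0.05):
    # 1.0 only when every state dimension is within tol of the planned
    # subgoal; otherwise the low-level learner receives no reward.
    reached = all(abs(s - g) <= tol for s, g in zip(state, subgoal))
    return 1.0 if reached else 0.0
```

The high-level planner then only needs to emit a sequence of subgoals; the low-level RL policy is trained against this sparse signal without the planner needing to know anything about the continuous control space.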

Unsupervised Expectation Learning for Multisensory Binding

no code implementations · 27 Sep 2018 · Pablo Barros, German I. Parisi, Manfred Eppe, Stefan Wermter

The model adapts concepts of expectation learning to enhance the unisensory representation based on the learned bindings.

Curriculum goal masking for continuous deep reinforcement learning

no code implementations · 17 Sep 2018 · Manfred Eppe, Sven Magg, Stefan Wermter

Deep reinforcement learning has recently focused on problems where policy or value functions are independent of goals.

reinforcement-learning · Reinforcement Learning (RL)
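A plausible reading of "goal masking" in this curriculum setting is that individual goal dimensions are replaced by the agent's already-achieved values, so a goal with more masked dimensions is easier to satisfy and difficulty can be scheduled over training. The sketch below shows that masking operation under this assumption; names and the mask encoding are illustrative, not the paper's implementation.

```python
def mask_goal(goal, achieved, mask):
    # mask[i] == 1 keeps the sampled goal dimension (must still be reached);
    # mask[i] == 0 substitutes the already-achieved value (trivially met).
    # Fewer unmasked dimensions -> easier goal, giving a difficulty curriculum.
    return [g if m else a for g, a, m in zip(goal, achieved, mask)]
```

For example, with mask `[1, 0, 1]` only the first and third goal dimensions remain challenging; a curriculum could start with mostly-zero masks and gradually unmask dimensions as success rates rise.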

Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction

no code implementations · 26 Jul 2016 · Michael Spranger, Jakob Suchan, Mehul Bhatt, Manfred Eppe

This paper presents a computational model of the processing of dynamic spatial relations occurring in an embodied robotic interaction setup.

Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction

no code implementations · 22 Apr 2016 · Manfred Eppe, Sean Trott, Jerome Feldman

We develop a natural language interface for human robot interaction that implements reasoning about deep semantics in natural language.

Tractable Epistemic Reasoning with Functional Fluents, Static Causal Laws and Postdiction

no code implementations · 1 Mar 2014 · Manfred Eppe

We present an epistemic action theory for tractable epistemic reasoning as an extension to the h-approximation (HPX) theory.

Epistemic Reasoning

Narrative based Postdictive Reasoning for Cognitive Robotics

no code implementations · 4 Jun 2013 · Manfred Eppe, Mehul Bhatt

Making sense of incomplete and conflicting narrative knowledge in the presence of abnormalities, unobservable processes, and other real world considerations is a challenge and crucial requirement for cognitive robotics systems.

Anomaly Detection · Translation

h-approximation: History-Based Approximation of Possible World Semantics as ASP

no code implementations · 17 Apr 2013 · Manfred Eppe, Mehul Bhatt, Frank Dylla

We propose an approximation of the Possible Worlds Semantics (PWS) for action planning.
