Search Results for author: Daniel Angelov

Found 5 papers, 0 papers with code

Learning rewards for robotic ultrasound scanning using probabilistic temporal ranking

no code implementations · 4 Feb 2020 · Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy

This work considers the inverse problem, where the goal of the task is unknown and a reward function must be inferred from exploratory demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy.

Composing Diverse Policies for Temporally Extended Tasks

no code implementations · 18 Jul 2019 · Daniel Angelov, Yordan Hristov, Michael Burke, Subramanian Ramamoorthy

Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics.

Tasks: Hierarchical Reinforcement Learning · Motion Planning

DynoPlan: Combining Motion Planning and Deep Neural Network based Controllers for Safe HRL

no code implementations · 24 Jun 2019 · Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction via loops and conditionals that take the system back to alternative earlier states.

Tasks: Robotics

Using Causal Analysis to Learn Specifications from Task Demonstrations

no code implementations · 4 Mar 2019 · Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy

In this work, we show that it is possible to learn a generative model of distinct user behavioral types extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space.

Tasks: Clustering
