no code implementations • 4 Feb 2020 • Michael Burke, Katie Lu, Daniel Angelov, Artūras Straižys, Craig Innes, Kartic Subr, Subramanian Ramamoorthy
This work considers the inverse problem: the goal of the task is unknown, and a reward function must be inferred from exploratory demonstrations provided by a demonstrator, for use in a downstream informative path-planning policy.
no code implementations • 31 Jul 2019 • Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy
Learning from demonstration is an effective method for human users to instruct desired robot behaviour.
no code implementations • 18 Jul 2019 • Daniel Angelov, Yordan Hristov, Michael Burke, Subramanian Ramamoorthy
Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics.
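A minimal illustrative sketch (not the paper's method) of what "discontinuous switches between local dynamics" can look like: fit a local linear model x_{t+1} ≈ A x_t over a sliding window and flag time steps where the one-step prediction residual jumps, indicating a change in the underlying dynamics. All names and thresholds here are hypothetical.

```python
# Illustrative sketch: detect switches between local linear dynamics
# x_{t+1} ~ A_i x_t by monitoring one-step prediction residuals.
import numpy as np

def fit_linear(X, Y):
    # Least-squares fit of Y ~ A X (states stored as columns).
    return Y @ np.linalg.pinv(X)

def detect_switches(traj, window=10, thresh=0.1):
    """Return time indices where the local dynamics appear to change."""
    switches = []
    for t in range(window, len(traj) - 1):
        X = traj[t - window:t].T          # recent states
        Y = traj[t - window + 1:t + 1].T  # their successors
        A = fit_linear(X, Y)
        resid = np.linalg.norm(traj[t + 1] - A @ traj[t])
        if resid > thresh:
            switches.append(t)
    return switches

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Toy trajectory: rotation dynamics switch direction at t = 50.
traj = [np.array([1.0, 0.0])]
for t in range(100):
    A = rot(0.1) if t < 50 else rot(-0.3)
    traj.append(A @ traj[-1])
traj = np.array(traj)
print(detect_switches(traj))  # first detected switch is at t = 50
```

Before the switch the window data are exactly linear, so residuals are numerically zero; at the switch the residual jumps to roughly ||(A2 - A1) x_t||, which the threshold catches.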
no code implementations • 24 Jun 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy
Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction through loops and conditionals that return the system to earlier states.
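A toy sketch of the compositional control pattern described above (hypothetical names, not the paper's system): primitives are invoked in sequence, and a conditional retry loop rewinds to the last checkpointed state when a primitive fails.

```python
# Illustrative sketch: sequencing primitives with an error-correcting loop
# that falls back to an earlier state on failure.
def run_program(primitives, state, max_retries=3):
    """Execute primitives in order; on failure, rewind to the last checkpoint."""
    i = 0
    checkpoints = [state]
    retries = 0
    while i < len(primitives):
        ok, state = primitives[i](state)
        if ok:
            checkpoints.append(state)
            i += 1
            retries = 0
        else:
            state = checkpoints[-1]   # conditional: return to an earlier state
            retries += 1
            if retries > max_retries:
                raise RuntimeError(f"primitive {i} keeps failing")
    return state

# Toy primitives: 'approach' always succeeds; 'grasp' fails once, then succeeds.
attempts = {"grasp": 0}

def approach(s):
    return True, s + ["approached"]

def grasp(s):
    attempts["grasp"] += 1
    return attempts["grasp"] > 1, s + ["grasped"]

print(run_program([approach, grasp], []))  # ['approached', 'grasped']
```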
no code implementations • 4 Mar 2019 • Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy
In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space.
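One simple way to "enforce clustering within the latent space", sketched for illustration only (this is not the paper's model): a contrastive-style penalty that pulls latent codes of the same behavioural type together and pushes different types apart beyond a margin. All names and constants here are assumptions.

```python
# Illustrative sketch: a clustering penalty on latent codes z, grouped by
# behaviour-type labels, of the kind used to shape a generative model's
# latent space.
import numpy as np

def clustering_loss(z, labels, margin=2.0):
    """z: (n, d) latent codes; labels: (n,) behaviour-type labels."""
    loss, n = 0.0, len(z)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(z[i] - z[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # attract same type
            else:
                loss += max(0.0, margin - d) ** 2  # repel different types
    return loss / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
z = rng.normal(size=(20, 2))
labels = np.array([0] * 10 + [1] * 10)

# Sanity check: shifting the two types apart lowers the penalty.
z_separated = z.copy()
z_separated[labels == 1] += np.array([5.0, 0.0])
print(clustering_loss(z, labels), clustering_loss(z_separated, labels))
```

In a full model this term would be added to the generative model's training objective so that preferred task solutions of the same user type land in the same latent cluster.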