no code implementations • 23 Mar 2019 • Anh Nguyen, Thanh-Toan Do, Ian Reid, Darwin G. Caldwell, Nikos G. Tsagarakis
We propose V2CNet, a new deep learning framework that automatically translates demonstration videos into commands that can be directly used in robotic applications.
1 code implementation • 16 Mar 2018 • Anh Nguyen, Thanh-Toan Do, Ian Reid, Darwin G. Caldwell, Nikos G. Tsagarakis
The key idea of our approach is the use of object descriptions to provide a detailed understanding of an object.
no code implementations • 1 Oct 2017 • Anh Nguyen, Dimitrios Kanoulas, Luca Muratore, Darwin G. Caldwell, Nikos G. Tsagarakis
We present a new method to translate videos to commands for robotic manipulation using deep Recurrent Neural Networks (RNNs).
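The abstract describes an encoder-decoder pipeline: a recurrent network consumes visual features from the video frames, and a decoder then emits command words one at a time. The sketch below illustrates that idea with a tiny vanilla RNN cell in NumPy; the cell type, dimensions, weights, and vocabulary are all illustrative placeholders, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRNNCell:
    """Vanilla tanh RNN cell. Stands in for the recurrent units in the
    paper; sizes and random weights here are purely illustrative."""
    def __init__(self, in_dim, hid_dim):
        self.Wx = rng.standard_normal((hid_dim, in_dim)) * 0.1
        self.Wh = rng.standard_normal((hid_dim, hid_dim)) * 0.1
        self.b = np.zeros(hid_dim)

    def step(self, x, h):
        return np.tanh(self.Wx @ x + self.Wh @ h + self.b)

def translate(frames, vocab, enc, dec, Wout, max_len=5):
    """Encode per-frame features, then greedily decode command tokens."""
    h = np.zeros(enc.Wx.shape[0])
    for f in frames:              # encoder: consume visual features
        h = enc.step(f, h)
    tokens = []
    x = np.zeros(dec.Wx.shape[1])  # decoder input starts empty
    for _ in range(max_len):       # decoder: emit one word per step
        h = dec.step(x, h)
        idx = int(np.argmax(Wout @ h))
        tokens.append(vocab[idx])
        x = np.eye(dec.Wx.shape[1])[idx]  # feed back one-hot of the word
    return tokens
```

A real system would extract the frame features with a pretrained CNN and train all weights end to end; this sketch only shows the encode-then-decode control flow.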
1 code implementation • 22 Aug 2017 • Anh Nguyen, Thanh-Toan Do, Darwin G. Caldwell, Nikos G. Tsagarakis
Our method first creates an event image from a list of events that occur in a very short time interval, then a Stacked Spatial LSTM Network (SP-LSTM) is used to learn the camera pose.
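The first step named here, building an event image from the events of a short time window, can be sketched as accumulating each event at its pixel location. The two-channels-per-polarity layout and the normalization below are assumptions for illustration; the paper's exact event-image encoding may differ, and the SP-LSTM pose regressor is omitted.

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate events from one short time interval into an image.

    events: iterable of (x, y, polarity, timestamp) tuples.
    Returns a (2, height, width) array, one channel per polarity
    (assumed representation, not necessarily the paper's).
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    for x, y, polarity, _t in events:
        channel = 0 if polarity > 0 else 1
        img[channel, y, x] += 1.0
    # Normalize each polarity channel to [0, 1] so pixel counts from
    # different windows are comparable (an illustrative choice).
    for c in range(2):
        peak = img[c].max()
        if peak > 0:
            img[c] /= peak
    return img
```

The resulting fixed-size image can then be fed to a convolutional or recurrent pose-regression network.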