no code implementations • 9 Dec 2023 • Motoya Ohnishi, Iretiayo Akinola, Jie Xu, Ajay Mandlekar, Fabio Ramos
As a specific case of our framework, we devise a model predictive control method for path tracking.
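As a rough illustration of the general recipe, the sketch below solves a finite-horizon tracking problem for a 2D point mass and applies only the first control, which is the core of any MPC loop. The dynamics, horizon, and cost weights are assumptions for illustration, not the method devised in the paper.

```python
# A minimal MPC-for-path-tracking sketch, assuming 2D point-mass dynamics.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 10                         # step size and horizon (assumed)

def rollout(x0, controls):
    """Integrate x_{t+1} = x_t + u_t * DT over the horizon."""
    states, x = [], x0.copy()
    for u in controls.reshape(HORIZON, 2):
        x = x + u * DT
        states.append(x.copy())
    return np.array(states)

def cost(controls, x0, reference):
    """Tracking error against the reference path plus a control-effort penalty."""
    states = rollout(x0, controls)
    return np.sum((states - reference) ** 2) + 1e-2 * np.sum(controls ** 2)

def mpc_step(x0, reference):
    """Optimize the whole control sequence but execute only the first action."""
    res = minimize(cost, np.zeros(HORIZON * 2), args=(x0, reference),
                   method="L-BFGS-B")
    return res.x[:2]

# Usage: track a straight-line reference starting from the origin.
reference = np.linspace([0.0, 0.0], [1.0, 1.0], HORIZON)
print("first control:", mpc_step(np.zeros(2), reference))
```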
no code implementations • 26 Oct 2023 • Ajay Mandlekar, Soroush Nasiriany, Bowen Wen, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, Dieter Fox
Imitation learning from a large set of human demonstrations has proved to be an effective paradigm for building capable robot agents.
no code implementations • 16 Jun 2023 • Chad DeChant, Iretiayo Akinola, Daniel Bauer
We demonstrate the task of learning to summarize and answer questions about a robot agent's past actions using natural language alone.
1 code implementation • 7 May 2022 • Yashraj Narang, Kier Storey, Iretiayo Akinola, Miles Macklin, Philipp Reist, Lukasz Wawrzyniak, Yunrong Guo, Adam Moravanszky, Gavriel State, Michelle Lu, Ankur Handa, Dieter Fox
We aim for Factory to open the doors to using simulation for robotic assembly, as well as many other contact-rich applications in robotics.
no code implementations • 31 Mar 2022 • Wei Yang, Balakumar Sundaralingam, Chris Paxton, Iretiayo Akinola, Yu-Wei Chao, Maya Cakmak, Dieter Fox
However, how to responsively generate smooth motions to take an object from a human is still an open question.
no code implementations • 26 Mar 2021 • Iretiayo Akinola, Anelia Angelova, Yao Lu, Yevgen Chebotar, Dmitry Kalashnikov, Jacob Varley, Julian Ibarz, Michael S. Ryoo
We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low-dimensional action inputs and high-dimensional visual inputs.
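For intuition, the sketch below enumerates one axis such a search could cover: the depth at which a low-dimensional action vector is fused into a convolutional image encoder. The layer sizes, the 4-D action, and the scoring stub are assumptions for illustration, not the paper's search space or evaluation.

```python
# A toy search over where to fuse an action vector into a visual encoder.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, fuse_at):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(3, 32, 3, stride=2, padding=1),
            nn.Conv2d(32, 32, 3, stride=2, padding=1),
            nn.Conv2d(32, 32, 3, stride=2, padding=1),
        ])
        self.fuse_at = fuse_at                # which conv block receives the action
        self.action_proj = nn.Linear(4, 32)   # assumed 4-D action input
        self.head = nn.Linear(32, 1)          # e.g., a Q-value head

    def forward(self, image, action):
        h = image
        for i, conv in enumerate(self.convs):
            h = torch.relu(conv(h))
            if i == self.fuse_at:             # broadcast-add the projected action
                h = h + self.action_proj(action)[:, :, None, None]
        return self.head(h.mean(dim=(2, 3)))

def proxy_score(net):
    """Placeholder metric; a real search would use validation task reward."""
    return net(torch.randn(8, 3, 64, 64), torch.randn(8, 4)).std().item()

best = max(range(3), key=lambda i: proxy_score(FusionNet(i)))
print("best fusion depth:", best)
```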
no code implementations • 24 Mar 2021 • Iretiayo Akinola, Zizhao Wang, Peter Allen
We propose a vision-based reinforcement learning (RL) approach for closed-loop trajectory generation in an arm reaching problem.
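As a toy sketch of the general idea, closed-loop control from images trained with a policy gradient, the snippet below runs one REINFORCE-style update against a stub environment. The network shapes, the Gaussian action noise, and the placeholder reward are assumptions for illustration, not the paper's algorithm.

```python
# A toy vision-based reaching loop with a REINFORCE-style update.
import torch
import torch.nn as nn

policy = nn.Sequential(                       # image -> joint-velocity command
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 7), nn.Tanh(),              # assumed 7-DoF arm
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def step_env(action):
    """Stub environment: a random frame and a stand-in distance reward."""
    return torch.randn(1, 3, 64, 64), -action.norm()

image, log_probs, rewards = torch.randn(1, 3, 64, 64), [], []
for _ in range(20):
    dist = torch.distributions.Normal(policy(image), 0.1)
    action = dist.sample()
    log_probs.append(dist.log_prob(action).sum())
    image, reward = step_env(action.squeeze(0))
    rewards.append(reward)

returns = torch.stack(rewards).detach().flip(0).cumsum(0).flip(0)  # reward-to-go
loss = -(torch.stack(log_probs) * returns).mean()
opt.zero_grad()
loss.backward()
opt.step()
```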
no code implementations • 21 Feb 2020 • Iretiayo Akinola, Jacob Varley, Dmitry Kalashnikov
In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature.
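A common way to realize a multi-view architecture is a weight-shared image encoder whose per-view features are concatenated before the action head; the sketch below shows that pattern, with the view count and feature sizes assumed for illustration rather than taken from the paper.

```python
# A minimal multi-view policy: shared encoder, concatenated per-view features.
import torch
import torch.nn as nn

class MultiViewPolicy(nn.Module):
    def __init__(self, n_views=3, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(         # weights shared across views
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 * n_views, action_dim)

    def forward(self, views):                 # views: (B, n_views, 3, H, W)
        feats = [self.encoder(views[:, i]) for i in range(views.shape[1])]
        return self.head(torch.cat(feats, dim=-1))

policy = MultiViewPolicy()
print(policy(torch.randn(2, 3, 3, 64, 64)).shape)   # torch.Size([2, 7])
```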
no code implementations • 10 Sep 2019 • Bohan Wu, Iretiayo Akinola, Jacob Varley, Peter Allen
When this methodology is used to realize grasps from coarse initial positions provided by a vision-only planner, the system becomes dramatically more robust to calibration errors in the camera-robot transform.
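The sketch below illustrates the closed-loop refinement pattern in the abstract: start from a coarse pose and repeatedly correct it using feedback until it converges, rather than trusting the initial estimate. The contact-sensing stub and proportional correction are hypothetical placeholders, not the learned policy from the paper.

```python
# Closed-loop refinement of a coarse grasp pose (feedback source is a stub).
import numpy as np

def sense_error(pose):
    """Stub: a feedback signal pointing toward an assumed true grasp center."""
    return np.array([0.52, 0.03, 0.10]) - pose

def refine_grasp(coarse_pose, gain=0.5, tol=1e-3, max_iters=50):
    """Nudge the pose a little each cycle until the feedback error vanishes."""
    pose = np.asarray(coarse_pose, dtype=float)
    for _ in range(max_iters):
        err = sense_error(pose)
        if np.linalg.norm(err) < tol:
            break
        pose = pose + gain * err
    return pose

# A miscalibrated starting pose converges to the sensed grasp center.
print(refine_grasp([0.50, 0.00, 0.12]))
```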
no code implementations • 8 Mar 2019 • Bohan Wu, Iretiayo Akinola, Peter K. Allen
Recent advances in on-policy reinforcement learning (RL) methods have enabled agents in virtual environments to master complex tasks with high-dimensional, continuous observation and action spaces.
1 code implementation • 29 Jun 2018 • Iretiayo Akinola, Jacob Varley, Boyuan Chen, Peter K. Allen
This framework greatly improves the performance of standard online grasp planning algorithms by incorporating a notion of reachability into the planning process.
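As a rough sketch of the idea, candidate grasps can be ranked by blending grasp quality with a reachability score before any expensive motion planning; the heuristic score and weighting below are placeholder assumptions, not the paper's learned reachability model.

```python
# Rank candidate grasps by a blend of quality and a reachability heuristic.
import numpy as np

def reachability_score(grasp_pose):
    """Placeholder: favor poses near an assumed workspace center."""
    return float(np.exp(-np.linalg.norm(grasp_pose[:3] - [0.5, 0.0, 0.3])))

def rank_grasps(candidates, quality, w=0.5):
    """Blend grasp quality with reachability and return grasps best-first."""
    scores = [(1 - w) * q + w * reachability_score(g)
              for g, q in zip(candidates, quality)]
    return [candidates[i] for i in np.argsort(scores)[::-1]]

grasps = [np.array([0.5, 0.0, 0.3, 0, 0, 0]),   # near the workspace center
          np.array([1.2, 0.6, 0.9, 0, 0, 0])]   # far, likely unreachable
print("best grasp:", rank_grasps(grasps, quality=[0.7, 0.9])[0])
```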