Search Results for author: Iretiayo Akinola

Found 11 papers, 2 papers with code

MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations

no code implementations • 26 Oct 2023 • Ajay Mandlekar, Soroush Nasiriany, Bowen Wen, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, Dieter Fox

Imitation learning from a large set of human demonstrations has proved to be an effective paradigm for building capable robot agents.

Imitation Learning

Learning to Summarize and Answer Questions about a Virtual Robot's Past Actions

no code implementations • 16 Jun 2023 • Chad DeChant, Iretiayo Akinola, Daniel Bauer

We therefore demonstrate the task of learning to summarize and answer questions about a robot agent's past actions using natural language alone.

Language Modelling • Large Language Model • +1

Factory: Fast Contact for Robotic Assembly

1 code implementation • 7 May 2022 • Yashraj Narang, Kier Storey, Iretiayo Akinola, Miles Macklin, Philipp Reist, Lukasz Wawrzyniak, Yunrong Guo, Adam Moravanszky, Gavriel State, Michelle Lu, Ankur Handa, Dieter Fox

We aim for Factory to open the doors to using simulation for robotic assembly, as well as many other contact-rich applications in robotics.

Visionary: Vision architecture discovery for robot learning

no code implementations • 26 Mar 2021 • Iretiayo Akinola, Anelia Angelova, Yao Lu, Yevgen Chebotar, Dmitry Kalashnikov, Jacob Varley, Julian Ibarz, Michael S. Ryoo

We propose a vision-based architecture search algorithm for robot manipulation learning, which discovers interactions between low-dimensional action inputs and high-dimensional visual inputs.

Neural Architecture Search • Robot Manipulation

CLAMGen: Closed-Loop Arm Motion Generation via Multi-view Vision-Based RL

no code implementations • 24 Mar 2021 • Iretiayo Akinola, Zizhao Wang, Peter Allen

We propose a vision-based reinforcement learning (RL) approach for closed-loop trajectory generation in an arm reaching problem.

Collision Avoidance • Reinforcement Learning (RL)

Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras

no code implementations • 21 Feb 2020 • Iretiayo Akinola, Jacob Varley, Dmitry Kalashnikov

In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature.

Camera Calibration

MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning

no code implementations • 10 Sep 2019 • Bohan Wu, Iretiayo Akinola, Jacob Varley, Peter Allen

When this methodology is used to realize grasps from coarse initial positions provided by a vision-only planner, the system becomes dramatically more robust to calibration errors in the camera-robot transform.

Reinforcement Learning (RL)

Pixel-Attentive Policy Gradient for Multi-Fingered Grasping in Cluttered Scenes

no code implementations • 8 Mar 2019 • Bohan Wu, Iretiayo Akinola, Peter K. Allen

Recent advances in on-policy reinforcement learning (RL) methods have enabled learning agents in virtual environments to master complex tasks with high-dimensional, continuous observation and action spaces.

Reinforcement Learning (RL) • +1

Workspace Aware Online Grasp Planning

1 code implementation • 29 Jun 2018 • Iretiayo Akinola, Jacob Varley, Boyuan Chen, Peter K. Allen

This framework greatly improves the performance of standard online grasp planning algorithms by incorporating a notion of reachability into the online grasp planning process.

Robotics
