no code implementations • ICLR 2019 • Deepak Pathak, Dhiraj Gandhi, Abhinav Gupta
Most importantly, we are able to deploy the exploration policy on a real robot, which learns to interact with objects entirely from scratch using only data collected via the differentiable exploration module.
1 code implementation • 25 Jan 2021 • Anurag Pratik, Soumith Chintala, Kavya Srinet, Dhiraj Gandhi, Rebecca Qian, Yuxuan Sun, Ryan Drew, Sara Elkafrawy, Anoushka Tiwari, Tucker Hart, Mary Williamson, Abhinav Gupta, Arthur Szlam
In recent years, there have been significant advances in building end-to-end Machine Learning (ML) systems that learn at scale.
no code implementations • 11 Aug 2020 • Sarah Young, Dhiraj Gandhi, Shubham Tulsiani, Abhinav Gupta, Pieter Abbeel, Lerrel Pinto
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
no code implementations • 3 Jul 2020 • Dhiraj Gandhi, Abhinav Gupta, Lerrel Pinto
In this work, we perform the first large-scale study of the interactions between sound and robotic action.
2 code implementations • NeurIPS 2020 • Devendra Singh Chaplot, Dhiraj Gandhi, Abhinav Gupta, Ruslan Salakhutdinov
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category.
Ranked #4 on Robot Navigation on Habitat 2020 Object Nav test-std
2 code implementations • ICLR 2020 • Devendra Singh Chaplot, Dhiraj Gandhi, Saurabh Gupta, Abhinav Gupta, Ruslan Salakhutdinov
The use of learning provides flexibility with respect to input modalities (in the SLAM module), leverages structural regularities of the world (in global policies), and provides robustness to errors in state estimation (in local policies).
1 code implementation • 8 Oct 2019 • Yufei Ye, Dhiraj Gandhi, Abhinav Gupta, Shubham Tulsiani
We present an approach to learn an object-centric forward model, and show that this allows us to plan for sequences of actions to achieve distant desired goals.
no code implementations • 25 Sep 2019 • Dhiraj Gandhi, Abhinav Gupta, Lerrel Pinto
In this work, we perform the first large-scale study of the interactions between sound and robotic action.
2 code implementations • 19 Jun 2019 • Adithyavairavan Murali, Tao Chen, Kalyan Vasudev Alwala, Dhiraj Gandhi, Lerrel Pinto, Saurabh Gupta, Abhinav Gupta
This paper introduces PyRobot, an open-source robotics framework for research and benchmarking.
2 code implementations • 10 Jun 2019 • Deepak Pathak, Dhiraj Gandhi, Abhinav Gupta
In this paper, we propose a formulation for exploration inspired by the work in active learning literature.
no code implementations • NeurIPS 2018 • Abhinav Gupta, Adithyavairavan Murali, Dhiraj Gandhi, Lerrel Pinto
The models trained with our home dataset showed a marked improvement of 43.7% over a baseline model trained with data collected in the lab.
no code implementations • 10 May 2018 • Adithyavairavan Murali, Yin Li, Dhiraj Gandhi, Abhinav Gupta
We believe this is the first attempt at learning to grasp with only tactile sensing and without any prior object knowledge.
no code implementations • 4 Aug 2017 • Adithyavairavan Murali, Lerrel Pinto, Dhiraj Gandhi, Abhinav Gupta
Recent self-supervised learning approaches focus on using a few thousand data points to learn policies for high-level, low-dimensional action spaces.
1 code implementation • 19 Apr 2017 • Dhiraj Gandhi, Lerrel Pinto, Abhinav Gupta
An alternative is to use simulation.
no code implementations • 5 Apr 2016 • Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, Abhinav Gupta
We argue that biological agents use physical interactions with the world to learn visual representations, unlike current vision systems, which use only passive observations (images and videos downloaded from the web).