no code implementations • 23 Apr 2024 • Sassan Mokhtar, Eugenio Chisari, Nick Heppert, Abhinav Valada
Precisely grasping and reconstructing articulated objects is key to enabling general robotic manipulation.
no code implementations • 22 Mar 2024 • Nick Heppert, Max Argus, Tim Welschehold, Thomas Brox, Abhinav Valada
Subsequently, in the live online trajectory generation stage, we first re-detect all objects, then warp the demonstration trajectory to the current scene, and finally trace the trajectory with the robot.
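The warping step described above can be sketched as a rigid re-anchoring of the demonstrated end-effector waypoints: compute the transform that takes the object's demonstrated pose to its re-detected pose, then apply it to every waypoint. This is a minimal illustrative sketch in NumPy; the function name and the assumption that poses are 4x4 homogeneous transforms are ours, not from the paper.

```python
import numpy as np

def warp_trajectory(demo_traj, demo_obj_pose, live_obj_pose):
    """Warp a demonstrated trajectory into the current scene.

    Illustrative sketch (not the paper's method): `demo_traj` is an
    (N, 4, 4) array of end-effector poses in the demonstration's world
    frame; the object poses are 4x4 homogeneous transforms.
    """
    # Rigid transform taking the demonstrated object frame to the
    # re-detected (live) object frame.
    T = live_obj_pose @ np.linalg.inv(demo_obj_pose)
    # Apply the same transform to every waypoint of the trajectory.
    return np.einsum('ij,njk->nik', T, demo_traj)
```

If the object has simply been translated since the demonstration, every waypoint is shifted by the same offset; the robot then traces the warped waypoints.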
no code implementations • 22 Mar 2024 • Adrian Röfer, Nick Heppert, Abdallah Ayman, Eugenio Chisari, Abhinav Valada
We frame this problem as the task of learning a low-dimensional visual-tactile embedding, wherein we encode a depth patch from which we decode the tactile signal.
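The encode/decode structure described above can be illustrated with a deliberately minimal linear sketch: a depth patch is projected into a low-dimensional code, from which a tactile signal is decoded. The class name, dimensions, and linear maps are our placeholders; the paper's embedding is learned, not hand-set.

```python
import numpy as np

class VisualTactileEmbedding:
    """Minimal linear encoder/decoder sketch (not the paper's network).

    Flattens a depth patch into a low-dimensional latent code and
    decodes a tactile signal from that code. All dimensions are
    illustrative assumptions.
    """

    def __init__(self, depth_dim=256, latent_dim=8, tactile_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        # In the paper these maps are learned; here they are random stand-ins.
        self.W_enc = rng.standard_normal((latent_dim, depth_dim)) * 0.01
        self.W_dec = rng.standard_normal((tactile_dim, latent_dim)) * 0.01

    def encode(self, depth_patch):
        """Depth patch -> low-dimensional visual-tactile code."""
        return self.W_enc @ depth_patch.ravel()

    def decode(self, z):
        """Low-dimensional code -> predicted tactile signal."""
        return self.W_dec @ z

    def predict_tactile(self, depth_patch):
        return self.decode(self.encode(depth_patch))
```

The point of the low-dimensional bottleneck is that the tactile signal must be recoverable from a compact code shared with the visual (depth) modality.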
1 code implementation • 13 Dec 2023 • Eugenio Chisari, Nick Heppert, Tim Welschehold, Wolfram Burgard, Abhinav Valada
It consists of an RGB-D image encoder that leverages recent advances to detect objects and infer their pose and latent code, and a decoder to predict shape and grasps for each object in the scene.
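The encoder/decoder interface described above can be sketched as follows: the encoder yields, per detected object, a pose and a latent code; the decoder maps that hypothesis to a shape representation and a set of grasps. The data structures, the occupancy-grid shape output, and the deterministic placeholder decoder below are our assumptions purely to illustrate the interface, not the paper's learned model.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ObjectHypothesis:
    """Per-object output of the (hypothetical) RGB-D image encoder."""
    pose: np.ndarray    # 4x4 object-to-camera transform
    latent: np.ndarray  # latent code summarizing the object

def decode_shape_and_grasps(hyp, grid_res=8, n_grasps=4):
    """Placeholder decoder: derive an occupancy grid ('shape') and a set
    of grasp poses deterministically from the latent code. The real
    decoder is learned; this only mirrors its inputs and outputs."""
    seed = int.from_bytes(hyp.latent.astype(np.float64).tobytes()[:8], "little")
    rng = np.random.default_rng(seed)
    shape = rng.random((grid_res,) * 3) > 0.5        # boolean occupancy grid
    grasps = np.tile(np.eye(4), (n_grasps, 1, 1))    # grasp poses, object frame
    grasps[:, :3, 3] = rng.uniform(-0.1, 0.1, (n_grasps, 3))
    # Express grasps in the camera frame via the inferred object pose.
    return shape, hyp.pose[None] @ grasps
```

Because shape and grasps are both decoded from the same latent code, a single forward pass yields everything needed to reconstruct and grasp each object in the scene.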
1 code implementation • CVPR 2023 • Nick Heppert, Muhammad Zubair Irshad, Sergey Zakharov, Katherine Liu, Rares Andrei Ambrus, Jeannette Bohg, Abhinav Valada, Thomas Kollar
We present CARTO, a novel approach for reconstructing multiple articulated objects from a single stereo RGB observation.
no code implementations • 7 May 2022 • Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg
Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets.