End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches.
The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.
Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data.
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping.
Ranked #5 for Robotic Grasping on the Cornell Grasp Dataset
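Closed-loop grasping of this kind typically reduces to selecting, at each frame, the best grasp from per-pixel prediction maps. The sketch below assumes a GG-CNN-style output (quality, angle, and width maps); the map names and shapes are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def select_grasp(quality, angle, width):
    """Pick the highest-quality pixel grasp from per-pixel prediction maps.

    quality, angle, width: HxW arrays as a grasp network might output.
    Returns (row, col, grasp_angle, gripper_width) for the best pixel.
    """
    r, c = np.unravel_index(np.argmax(quality), quality.shape)
    return int(r), int(c), float(angle[r, c]), float(width[r, c])

# Toy maps standing in for network output (hypothetical values).
q = np.zeros((4, 4)); q[2, 1] = 0.9   # one confident grasp pixel
a = np.full((4, 4), 0.3)              # grasp angle in radians
w = np.full((4, 4), 40.0)             # gripper width in pixels
print(select_grasp(q, a, w))  # → (2, 1, 0.3, 40.0)
```

In a closed-loop setting this selection would be re-run on every new frame, letting the controller servo toward the current best grasp.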
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.
Enter the RobotriX, an extremely photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems.
The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing, driving development in robotic vision and manipulation.
In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from an n-channel image of the scene.
Ranked #1 for Robotic Grasping on the Cornell Grasp Dataset
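The "n-channel image" in such systems is usually an RGB-D stack fed to the network as one tensor. A minimal preprocessing sketch, assuming four channels (RGB plus normalized depth); the normalization scheme here is an assumption, not the paper's exact recipe:

```python
import numpy as np

def make_input(rgb, depth):
    """Stack RGB and depth into an n-channel image for a grasp network.

    rgb:   HxWx3 uint8 color image
    depth: HxW float depth map
    Returns an HxWx4 float32 tensor (assumed channel layout).
    """
    rgb = rgb.astype(np.float32) / 255.0                 # scale colors to [0, 1]
    d = depth.astype(np.float32)
    d = (d - d.mean()) / (d.std() + 1e-6)                # zero-mean, unit-ish depth
    return np.concatenate([rgb, d[..., None]], axis=-1)  # HxWx4

x = make_input(np.zeros((224, 224, 3), np.uint8),
               np.ones((224, 224), np.float32))
print(x.shape)  # → (224, 224, 4)
```

Keeping the channel stacking modular is what lets the same network accept RGB-only, depth-only, or RGB-D input by varying n.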
The hand's point cloud is pruned and robust global registration is performed to generate object pose hypotheses, which are clustered.
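The clustering step groups registration results that agree on roughly the same object pose. A simplified stand-in, clustering hypotheses by translation only (a real system would also compare rotations; the 2 cm radius is an assumed threshold):

```python
import numpy as np

def cluster_poses(translations, radius=0.02):
    """Greedily cluster object-pose hypotheses by translation distance.

    translations: Nx3 array of candidate object positions (meters).
    Returns a list of [centroid, member_indices] clusters.
    """
    clusters = []
    for i, t in enumerate(translations):
        for centroid, members in clusters:
            if np.linalg.norm(t - centroid) < radius:
                members.append(i)   # close enough: join existing cluster
                break
        else:
            clusters.append([t.copy(), [i]])  # start a new cluster

    return clusters

hyps = np.array([[0.0, 0.0, 0.0],
                 [0.005, 0.0, 0.0],   # near the first hypothesis
                 [0.5, 0.0, 0.0]])    # a distinct hypothesis
print(len(cluster_poses(hyps)))  # → 2
```

The largest cluster then serves as the consensus pose estimate, which makes the pipeline robust to individual registration failures.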