Robotic Grasping
80 papers with code • 4 benchmarks • 16 datasets
This task involves using deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is a very complex task, as it may involve dynamic environments and objects unknown to the network.
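A common output representation in vision-based grasp detection, used by rectangle-annotated datasets such as Jacquard below, is an oriented grasp rectangle parameterized as (x, y, θ, w, h). As a minimal sketch under that assumed convention (the function name and parameter layout are illustrative, not tied to any particular dataset's API), the rectangle can be converted to its four corner points for visualization or overlap-based evaluation:

```python
import math

def grasp_rectangle_corners(x, y, theta, w, h):
    """Return the four corners of an oriented grasp rectangle.

    (x, y): center in image coordinates, theta: in-plane rotation in
    radians, w: gripper opening width, h: gripper plate height.
    """
    dx, dy = w / 2.0, h / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = []
    # Rotate each axis-aligned offset by theta, then translate to (x, y).
    for cx, cy in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]:
        corners.append((x + cx * cos_t - cy * sin_t,
                        y + cx * sin_t + cy * cos_t))
    return corners
```

With theta = 0 this reduces to an axis-aligned box centered at (x, y), which is a quick sanity check when debugging grasp annotations.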
Libraries
Use these libraries to find Robotic Grasping models and implementations.
Most implemented papers
Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping.
Jacquard: A Large Scale Dataset for Robotic Grasp Detection
Jacquard is built on a subset of ShapeNet, a large CAD models dataset, and contains both RGB-D images and annotations of successful grasping positions based on grasp attempts performed in a simulated environment.
The RobotriX: An eXtremely Photorealistic and Very-Large-Scale Indoor Dataset of Sequences with Robot Trajectories and Interactions
Enter the RobotriX, an extremely photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems.
Vision-based Robotic Grasping From Object Localization, Object Pose Estimation to Grasp Estimation for Parallel Grippers: A Review
We conclude three key tasks during vision-based robotic grasping, which are object localization, object pose estimation and grasp estimation.
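The three stages identified in this review form a sequential pipeline. A hypothetical skeleton, assuming each stage is a pluggable callable (the function and parameter names are illustrative, not from the paper):

```python
def grasp_pipeline(rgbd_image, object_detector, pose_estimator, grasp_planner):
    """Three-stage vision-based grasping: localize -> pose -> grasp.

    All three callables are placeholders; in a real system they might be
    a detector network, a 6-DoF object pose network, and an analytic or
    learned grasp planner for a parallel gripper.
    """
    bbox = object_detector(rgbd_image)           # 1. object localization
    pose = pose_estimator(rgbd_image, bbox)      # 2. object pose estimation
    grasp = grasp_planner(rgbd_image, pose)      # 3. grasp estimation
    return grasp
```

Keeping the stages as separate callables makes it easy to swap, say, one pose estimator for another without touching localization or grasp planning.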
Accept Synthetic Objects as Real: End-to-End Training of Attentive Deep Visuomotor Policies for Manipulation in Clutter
In addition, we find that both ASOR-IA and ASOR-EA outperform previous approaches even in uncluttered environments, and that ASOR-EA in clutter performs better than the previous best baseline did in an uncluttered environment.
Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics
We present a convolutional neural network for joint 3D shape prediction and viewpoint estimation from a single input image.
IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks
The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.
Reward Engineering for Object Pick and Place Training
Reinforcement learning is the field of study in which an agent learns a policy for selecting actions by exploring an environment and exploiting its rewards.
EGAD! an Evolved Grasping Analysis Dataset for diversity and reproducibility in robotic manipulation
We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.
Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands
The hand's point cloud is pruned and robust global registration is performed to generate object pose hypotheses, which are clustered.