Robotic Grasping

80 papers with code • 4 benchmarks • 16 datasets

This task uses deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is a complex problem, as it can involve dynamic environments and objects the network has never seen.
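
A common formulation is to map a single RGB-D view to a planar grasp described by centre, angle, gripper width, and a quality score. The following is a minimal illustrative sketch of such a grasp predictor, not the architecture of any particular paper listed below; layer sizes and the 5-value grasp encoding are assumptions for illustration.

```python
# Minimal sketch of a grasp-prediction network (illustrative only):
# an RGB-D image is mapped to one planar grasp (x, y, angle, width, quality).
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),   # 4-channel RGB-D input
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, 5)  # x, y, angle, width, quality

    def forward(self, rgbd):
        return self.head(self.backbone(rgbd))

net = GraspNet()
grasp = net(torch.randn(1, 4, 224, 224))  # one predicted grasp per image
print(grasp.shape)  # torch.Size([1, 5])
```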

Most implemented papers

Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods

smrjan/robotic-vision 28 Feb 2018

In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping.

Jacquard: A Large Scale Dataset for Robotic Grasp Detection

TianheWu/LGPNet 30 Mar 2018

Jacquard is built on a subset of ShapeNet, a large CAD models dataset, and contains both RGB-D images and annotations of successful grasping positions based on grasp attempts performed in a simulated environment.

The RobotriX: An eXtremely Photorealistic and Very-Large-Scale Indoor Dataset of Sequences with Robot Trajectories and Interactions

3dperceptionlab/therobotrix 19 Jan 2019

Enter the RobotriX, an extremely photorealistic indoor dataset designed to enable the application of deep learning techniques to a wide variety of robotic vision problems.

Vision-based Robotic Grasping From Object Localization, Object Pose Estimation to Grasp Estimation for Parallel Grippers: A Review

GeorgeDu/vision-based-robotic-grasping 16 May 2019

We identify three key tasks in vision-based robotic grasping: object localization, object pose estimation, and grasp estimation.
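
To make the decomposition concrete, here is a toy sketch of how those three stages compose into a pipeline. The functions are placeholders standing in for learned models, not code from the review, and the `Grasp` representation is an assumption.

```python
# Illustrative three-stage grasping pipeline: localization -> pose -> grasp.
from dataclasses import dataclass
import numpy as np

@dataclass
class Grasp:
    position: np.ndarray   # 3D grasp point in the camera frame
    approach: np.ndarray   # approach direction
    width: float           # parallel-gripper opening

def localize_objects(rgbd):
    """Object localization: return regions of interest (boxes or masks)."""
    return [((100, 100), (200, 200))]          # placeholder ROI

def estimate_pose(rgbd, roi):
    """Object pose estimation: return a 4x4 object-to-camera transform."""
    return np.eye(4)                           # placeholder pose

def estimate_grasp(rgbd, roi, pose):
    """Grasp estimation: propose a grasp for a parallel gripper."""
    return Grasp(pose[:3, 3], np.array([0.0, 0.0, -1.0]), 0.05)

def grasp_pipeline(rgbd):
    grasps = []
    for roi in localize_objects(rgbd):
        pose = estimate_pose(rgbd, roi)
        grasps.append(estimate_grasp(rgbd, roi, pose))
    return grasps

print(grasp_pipeline(np.zeros((480, 640, 4))))
```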

Accept Synthetic Objects as Real: End-to-End Training of Attentive Deep Visuomotor Policies for Manipulation in Clutter

pouyaAB/Accept_Synthetic_Objects_as_Real 24 Sep 2019

In addition, we find that both ASOR-IA and ASOR-EA outperform previous approaches even in uncluttered environments, with ASOR-EA in clutter performing better than the previous best baseline does in an uncluttered environment.

Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics

mees/self-supervised-3D 17 Oct 2019

We present a convolutional neural network for joint 3D shape prediction and viewpoint estimation from a single input image.
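
A joint prediction of this kind is typically built as a shared encoder with two output heads. The sketch below shows that structure only; the layer sizes, the latent shape code, and the azimuth/elevation viewpoint encoding are assumptions, not the paper's architecture.

```python
# Rough sketch of a shared-encoder, two-head network for joint shape and
# viewpoint prediction from a single image (illustrative assumptions only).
import torch
import torch.nn as nn

class ShapeViewpointNet(nn.Module):
    def __init__(self, shape_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, 2, 2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, 2, 2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.shape_head = nn.Linear(128, shape_dim)   # latent shape code
        self.view_head = nn.Linear(128, 2)            # azimuth, elevation

    def forward(self, image):
        features = self.encoder(image)
        return self.shape_head(features), self.view_head(features)

shape, view = ShapeViewpointNet()(torch.randn(1, 3, 128, 128))
print(shape.shape, view.shape)
```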

IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks

clvrai/furniture 17 Nov 2019

The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks.

Reward Engineering for Object Pick and Place Training

ukachyuthan/Rewards_in_RL 11 Jan 2020

Reinforcement learning is the field of study in which an agent learns a policy for choosing actions by exploring an environment and exploiting its rewards.
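
Reward engineering for pick-and-place usually means shaping a dense signal so the agent is guided toward the object and then toward the goal. The toy function below illustrates that idea; the terms and weights are assumptions for illustration, not the paper's reward.

```python
# Toy shaped reward for pick-and-place: approach the object, then carry it
# toward the goal, with a sparse bonus on success (illustrative only).
import numpy as np

def shaped_reward(gripper_pos, object_pos, goal_pos, grasped, placed):
    reach_cost = np.linalg.norm(gripper_pos - object_pos)
    place_cost = np.linalg.norm(object_pos - goal_pos)
    reward = -reach_cost                 # encourage reaching the object
    if grasped:
        reward = 1.0 - place_cost        # grasp bonus, then move toward goal
    if placed:
        reward += 10.0                   # sparse success bonus
    return reward

print(shaped_reward(np.zeros(3), np.array([0.2, 0.1, 0.0]),
                    np.array([0.5, 0.5, 0.1]), grasped=False, placed=False))
```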

EGAD! an Evolved Grasping Analysis Dataset for diversity and reproducibility in robotic manipulation

dougsm/egad 3 Mar 2020

We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.

Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands

wenbowen123/icra20-hand-object-pose 7 Mar 2020

The hand's point cloud is pruned and robust global registration is performed to generate object pose hypotheses, which are clustered.
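
The clustering step groups pose hypotheses that agree up to a small translation and rotation. Below is a simplified stand-in for that idea using a greedy threshold-based grouping over 4x4 transforms; the thresholds and the greedy scheme are assumptions, not the paper's method.

```python
# Illustrative clustering of object-pose hypotheses (4x4 transforms) by
# translation distance and rotation angle (simplified, not the paper's code).
import numpy as np

def pose_distance(T1, T2):
    """Translation distance and rotation angle between two poses."""
    dt = np.linalg.norm(T1[:3, 3] - T2[:3, 3])
    R = T1[:3, :3].T @ T2[:3, :3]
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return dt, angle

def cluster_poses(hypotheses, t_thresh=0.02, r_thresh=np.deg2rad(15)):
    clusters = []                      # each cluster keeps its first pose as representative
    for T in hypotheses:
        for cluster in clusters:
            dt, angle = pose_distance(cluster[0], T)
            if dt < t_thresh and angle < r_thresh:
                cluster.append(T)
                break
        else:
            clusters.append([T])
    return clusters

hypotheses = [np.eye(4) for _ in range(5)]
print(len(cluster_poses(hypotheses)))  # 1 cluster of identical poses
```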