Robotic Grasping
80 papers with code • 4 benchmarks • 16 datasets
This task involves using deep learning to determine how best to grasp objects with robotic arms across different scenarios. It is highly complex, as it may involve dynamic environments and objects that are unseen during training.
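A common way to frame the problem is to represent each grasp hypothesis as a planar rectangle (position, in-plane rotation, jaw opening) and let a learned model score candidates. The sketch below illustrates that framing only; the `Grasp2D` fields and the `toy_grasp_score` heuristic are illustrative stand-ins for what a trained network would predict, not the method of any paper listed here.

```python
from dataclasses import dataclass
import math

@dataclass
class Grasp2D:
    """Planar grasp hypothesis: gripper centre, in-plane rotation, jaw opening."""
    x: float       # grasp centre (e.g. pixels or metres)
    y: float
    theta: float   # in-plane rotation, radians
    width: float   # gripper jaw opening

def toy_grasp_score(grasp: Grasp2D, obj_cx: float, obj_cy: float, obj_width: float) -> float:
    """Illustrative quality heuristic: favour grasps centred on the object whose
    opening matches the object width. A learned model replaces this in practice."""
    dist = math.hypot(grasp.x - obj_cx, grasp.y - obj_cy)
    width_err = abs(grasp.width - obj_width)
    return 1.0 / (1.0 + dist + width_err)

# Rank candidate grasps for a hypothetical object centred at (12, 11), width 4.
candidates = [Grasp2D(10, 10, 0.0, 5), Grasp2D(12, 11, 0.3, 4)]
best = max(candidates, key=lambda g: toy_grasp_score(g, 12, 11, 4))
```

In a real pipeline the candidate set would come from a detector over an RGB-D image, and the score from a network trained on labelled or self-supervised grasp outcomes.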
Libraries
Use these libraries to find Robotic Grasping models and implementations
Latest papers
Inverse Kinematics for Neuro-Robotic Grasping with Humanoid Embodied Agents
We generalize the embodied agent introduced for NICOL so that it can also be embodied by NICO.
Generalizing 6-DoF Grasp Detection via Domain Prior Knowledge
In this paper, we focus on the generalization ability of 6-DoF grasp detection methods.
GaussianGrasper: 3D Language Gaussian Splatting for Open-vocabulary Robotic Grasping
In particular, we propose an Efficient Feature Distillation (EFD) module that employs contrastive learning to efficiently and accurately distill language embeddings derived from foundational models.
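Contrastive distillation of this kind can be sketched as an InfoNCE-style objective: each student embedding should be most similar to the teacher (foundation-model) embedding of the same sample and dissimilar to the rest. The pure-Python sketch below shows that objective in general form; it is an assumption-laden illustration, not the paper's EFD module.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_distill_loss(student, teacher, tau=0.1):
    """InfoNCE-style distillation loss: for each sample i, treat teacher[i] as
    the positive and all other teacher embeddings as negatives."""
    n = len(student)
    loss = 0.0
    for i in range(n):
        sims = [cosine(student[i], t) / tau for t in teacher]
        m = max(sims)  # log-sum-exp with max subtracted for stability
        logsumexp = m + math.log(sum(math.exp(s - m) for s in sims))
        loss += -(sims[i] - logsumexp)
    return loss / n
```

Minimizing this pulls each student feature toward its matching teacher feature while pushing it away from the others, which is the basic mechanism behind contrastive feature distillation.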
STAR: Shape-focused Texture Agnostic Representations for Improved Object Detection and 6D Pose Estimation
To focus learning on shape features, textures are randomized during rendering of the training data.
PGA: Personalizing Grasping Agents with Single Human-Robot Interaction
Based on the acquired information, PGA pseudo-labels objects in the Reminiscence using our proposed label propagation algorithm.
Quality Diversity through Human Feedback
Meanwhile, Quality Diversity (QD) algorithms excel at identifying diverse and high-quality solutions but often rely on manually crafted diversity metrics.
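The canonical QD algorithm is MAP-Elites: an archive keeps the best solution found in each bin of a behavioural descriptor space, so the result is both diverse (bins covered) and high-quality (elites per bin). Below is a minimal, self-contained sketch on a toy 1-D problem; the descriptor, fitness, and mutation operator are placeholders chosen for illustration.

```python
import random

def map_elites(fitness, descriptor, sample, mutate, n_bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites loop: maintain one elite per descriptor bin."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Select a random elite and mutate it.
            parent = rng.choice(list(archive.values()))[1]
            x = mutate(parent, rng)
        else:
            # Occasionally inject a fresh random solution.
            x = sample(rng)
        b = min(int(descriptor(x) * n_bins), n_bins - 1)
        f = fitness(x)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)  # new elite for this bin
    return archive

# Toy domain: solutions in [0, 1], descriptor is the value itself,
# fitness peaks at x = 0.5.
fit = lambda x: 1.0 - abs(x - 0.5)
desc = lambda x: x
sample = lambda rng: rng.random()
mutate = lambda x, rng: min(1.0, max(0.0, x + rng.gauss(0.0, 0.1)))
archive = map_elites(fit, desc, sample, mutate)
```

The manually crafted `descriptor` here is exactly what human-feedback variants aim to replace: instead of a hand-coded diversity metric, the notion of "different" is learned from human judgments.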
Domain Randomization for Sim2real Transfer of Automatically Generated Grasping Datasets
More than 7000 reach-and-grasp trajectories have been generated with Quality-Diversity (QD) methods on 3 different arms and grippers, including parallel fingers and a dexterous hand, and tested in the real world.
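Domain randomization for sim2real transfer amounts to resampling simulator parameters (dynamics, geometry, appearance) each episode so that policies do not overfit to one simulator configuration. The sketch below shows the pattern in general; the parameter names and ranges are illustrative assumptions, not values from the paper.

```python
import random

def randomize_domain(rng):
    """Sample one set of simulator parameters for a training episode.
    All names and ranges are illustrative, not taken from any paper."""
    return {
        "object_mass_kg":  rng.uniform(0.05, 0.5),   # dynamics
        "friction_coeff":  rng.uniform(0.3, 1.2),
        "object_scale":    rng.uniform(0.9, 1.1),    # geometry
        "camera_dx_m":     rng.gauss(0.0, 0.01),     # sensing noise
        "light_intensity": rng.uniform(0.5, 1.5),    # appearance
    }

rng = random.Random(42)
# One fresh parameter set per episode; the simulator would be reconfigured
# with these values before each rollout.
episodes = [randomize_domain(rng) for _ in range(3)]
```

A policy trained across many such samples sees the real world as just one more draw from the randomized distribution, which is what makes zero-shot transfer of simulated grasp trajectories plausible.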
Toward a Plug-and-Play Vision-Based Grasping Module for Robotics
This framework addresses two main issues: the lack of an off-the-shelf vision module for detecting object pose and the generalization of QD trajectories to the whole robot operational space.
Grasp-Anything: Large-scale Grasp Dataset from Foundation Models
Foundation models such as ChatGPT have made significant strides in robotic tasks due to their universal representation of real-world domains.
SCENEREPLICA: Benchmarking Real-World Robot Manipulation by Creating Replicable Scenes
We present a new reproducible benchmark for evaluating robot manipulation in the real world, specifically focusing on pick-and-place.