Robotic Grasping
80 papers with code • 4 benchmarks • 16 datasets
This task involves using deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is highly complex, as it may involve dynamic environments and objects the network has never seen before.
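A common problem formulation in this area represents a planar grasp as a rectangle (center, angle, gripper width) in image coordinates, which a network regresses, or scores per pixel, from RGB-D input. As a hedged illustration of that representation, the sketch below uses a crude, non-learned depth-contrast heuristic in place of a learned grasp-quality map; the function names and the scoring rule are invented for this example and do not come from any specific paper on this page.

```python
import numpy as np

def grasp_quality_map(depth, patch=3):
    """Score each pixel by local depth contrast (higher = more graspable).

    This is a hypothetical stand-in for the per-pixel quality map a deep
    model would predict: object pixels that rise above the surrounding
    surface (i.e. are closer to the camera) score highest at their edges.
    """
    h, w = depth.shape
    quality = np.zeros_like(depth, dtype=float)
    for y in range(patch, h - patch):
        for x in range(patch, w - patch):
            local = depth[y - patch:y + patch + 1, x - patch:x + patch + 1]
            # Smaller depth = closer to camera; contrast with the local
            # maximum rewards pixels on a raised object near its boundary.
            quality[y, x] = local.max() - depth[y, x]
    return quality

def best_grasp(depth):
    """Return the (x, y) image coordinates of the best-scoring grasp center."""
    q = grasp_quality_map(depth)
    y, x = np.unravel_index(np.argmax(q), q.shape)
    return int(x), int(y)

# Toy scene: a flat table at depth 1.0 with a raised box (depth 0.8).
depth = np.ones((32, 32))
depth[12:20, 12:20] = 0.8
print(best_grasp(depth))  # a pixel on the box boundary
```

Learning-based methods replace the hand-written scoring loop with a convolutional network trained on annotated grasps, which is what lets them generalize to cluttered scenes and unseen objects.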
Libraries
Use these libraries to find Robotic Grasping models and implementations.
Latest papers with no code
Synthetic data enables faster annotation and robust segmentation for multi-object grasping in clutter
In this work, we propose a synthetic data generation method that minimizes human intervention and makes downstream image segmentation algorithms more robust by combining a generated synthetic dataset with a smaller real-world dataset (hybrid dataset).
Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization
In this study, we introduce an innovative bionic reflex control pipeline leveraging reinforcement learning (RL), thereby eliminating the need for human intervention during control design.
FViT-Grasp: Grasping Objects With Using Fast Vision Transformers
This study addresses the challenge of manipulation, a prominent issue in robotics.
A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in Cluttered Environments
The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem.
Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration
To this end, we propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of food compliant objects.
Representation Abstractions as Incentives for Reinforcement Learning Agents: A Robotic Grasping Case Study
The results show that RL agents using numerical states can perform on par with non-learning baselines.
WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model
The target instruction is then forwarded to a visual grounding system for object pose and size estimation, following which the robot grasps the object accordingly.
Instance segmentation based 6D pose estimation of industrial objects using point clouds for robotic bin-picking
3D object pose estimation for robotic grasping and manipulation is a crucial task in the manufacturing industry.
DMFC-GraspNet: Differentiable Multi-Fingered Robotic Grasp Generation in Cluttered Scenes
The results demonstrate the effectiveness of the proposed approach in predicting versatile and dense grasps, and in advancing the field of multi-fingered robotic grasping.
Learning Any-View 6DoF Robotic Grasping in Cluttered Scenes via Neural Surface Rendering
A significant challenge for real-world robotic manipulation is the effective 6DoF grasping of objects in cluttered scenes from any single viewpoint without the need for additional scene exploration.