Robotic Grasping
78 papers with code • 4 benchmarks • 16 datasets
This task involves using deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is highly complex, as it may involve dynamic environments and objects that are unknown to the network.
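A common output representation for such networks is a planar grasp rectangle (pixel center, rotation angle, gripper opening width), which is then lifted to a gripper pose using the depth value and camera intrinsics. A minimal sketch of that conversion, where the function name and intrinsic values are illustrative assumptions rather than any particular method's API:

```python
import numpy as np

def rectangle_to_pose(cx, cy, angle, width, depth_m):
    """Lift a planar grasp rectangle (pixel center, rotation, opening
    width) to a top-down gripper pose in the camera frame.
    Intrinsics below are illustrative, not from a specific camera."""
    fx, fy, ppx, ppy = 615.0, 615.0, 320.0, 240.0  # assumed pinhole intrinsics
    # Back-project the pixel center to a 3-D point using the depth value.
    x = (cx - ppx) * depth_m / fx
    y = (cy - ppy) * depth_m / fy
    return {"position": (x, y, depth_m), "yaw": angle, "width": width}
```

For example, a rectangle centered at the principal point back-projects to a point directly on the optical axis at the measured depth.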
Latest papers with no code
Speeding up 6-DoF Grasp Sampling with Quality-Diversity
We believe these results to be a significant step toward the generation of large datasets that can lead to robust and generalizing robotic grasping policies.
Grasping Trajectory Optimization with Point Clouds
The task space of a robot is represented by a point cloud that can be obtained from depth sensors.
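Obtaining such a point cloud from a depth sensor is a standard back-projection through the pinhole camera model; a minimal sketch (the function name and intrinsic parameters are illustrative assumptions):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, ppx, ppy):
    """Back-project a depth image (in metres) into an N x 3 point cloud
    in the camera frame using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - ppx) * z / fx
    y = (v - ppy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```

In practice, libraries such as Open3D provide equivalent utilities; the sketch only shows the underlying geometry.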
PhyGrasp: Generalizing Robotic Grasping with Physics-informed Large Multimodal Models
With these two capabilities, PhyGrasp is able to accurately assess the physical properties of object parts and determine optimal grasping poses.
Jacquard V2: Refining Datasets using the Human In the Loop Data Correction Method
The enhanced dataset, named the Jacquard V2 Grasping Dataset, served as the training data for a range of neural networks.
Robust Analysis of Multi-Task Learning on a Complex Vision System
(2) We empirically compare method performance when applied to feature-level gradients versus parameter-level gradients across a large set of MTL optimization algorithms, and conclude that the feature-level gradient surrogate is reasonable when a method-specific theoretical guarantee exists, but is not generalizable to all methods.
Physics-Encoded Graph Neural Networks for Deformation Prediction under Contact
We also incorporate cross-attention mechanisms to capture the interplay between the objects.
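Cross-attention lets features of one object attend to features of the other, so each node's representation is informed by the contacting body. A minimal single-head sketch (not the paper's implementation, which operates inside a graph neural network):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head cross-attention: queries from one object's nodes
    attend to keys/values computed from the other object's nodes."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (Nq, Nk) scaled similarity
    scores -= scores.max(axis=-1, keepdims=True)    # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ values                          # (Nq, d_v) attended output
```

When all keys are identical the softmax weights are uniform, so each query simply receives the mean of the values.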
AGILE: Approach-based Grasp Inference Learned from Element Decomposition
The proposed method achieves a 90% grasp success rate on seen objects and 78% on unseen objects in the CoppeliaSim simulation environment.
Synthetic data enables faster annotation and robust segmentation for multi-object grasping in clutter
In this work, we propose a synthetic data generation method that minimizes human intervention and makes downstream image segmentation algorithms more robust by combining a generated synthetic dataset with a smaller real-world dataset (hybrid dataset).
Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization
In this study, we introduce an innovative bionic reflex control pipeline leveraging reinforcement learning (RL), thereby eliminating the need for human intervention during control design.
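Domain randomization means resampling simulation parameters each training episode so the learned policy does not overfit to one physics configuration. A minimal sketch; the parameter names and ranges below are illustrative assumptions, not values from the paper:

```python
import random

def randomized_physics():
    """Sample one randomized simulator configuration per RL episode.
    Ranges are illustrative placeholders, not taken from the paper."""
    return {
        "friction": random.uniform(0.4, 1.2),      # fingertip friction coefficient
        "object_mass": random.uniform(0.05, 0.5),  # object mass in kg
        "sensor_noise": random.gauss(0.0, 0.01),   # additive tactile noise sample
    }
```

A new configuration would typically be drawn at every environment reset, exposing the policy to the whole range during training.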
FViT-Grasp: Grasping Objects With Using Fast Vision Transformers
This study addresses the challenge of manipulation, a prominent issue in robotics.