Robotic Grasping

78 papers with code • 4 benchmarks • 16 datasets

This task uses deep learning to determine how best to grasp objects with robotic arms in different scenarios. It is a complex problem, as it may involve dynamic environments and objects that are unknown to the network.
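
As a rough illustration of the typical learning setup, the sketch below maps a depth image to a planar grasp parameterized as (x, y, rotation angle, gripper width) with a small convolutional network. All names, shapes, and the loss are illustrative assumptions, not taken from any paper listed on this page.

```python
# Minimal sketch (illustrative only): a small CNN that regresses a planar
# grasp (x, y, rotation angle, gripper width) from a single depth image.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four outputs: grasp centre (x, y), rotation angle, gripper width.
        self.head = nn.Linear(64, 4)

    def forward(self, depth):
        # depth: (batch, 1, H, W) depth image from an RGB-D sensor
        return self.head(self.features(depth).flatten(1))

model = GraspNet()
depth = torch.rand(8, 1, 224, 224)           # fake batch of depth images
pred = model(depth)                          # (8, 4) predicted grasp parameters
target = torch.rand(8, 4)                    # placeholder ground-truth grasps
loss = nn.functional.mse_loss(pred, target)  # simple regression loss
loss.backward()
```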

Latest papers with no code

Speeding up 6-DoF Grasp Sampling with Quality-Diversity

no code yet • 10 Mar 2024

We believe these results to be a significant step toward the generation of large datasets that can lead to robust and generalizing robotic grasping policies.

Grasping Trajectory Optimization with Point Clouds

no code yet • 8 Mar 2024

The task space of a robot is represented by a point cloud that can be obtained from depth sensors.
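
For context, a depth image can be back-projected into such a point cloud with the standard pinhole camera model. The snippet below is a generic sketch (the intrinsics fx, fy, cx, cy are made-up values), not code from the paper.

```python
# Generic sketch: back-project a depth image into a 3D point cloud using
# pinhole camera intrinsics (fx, fy, cx, cy are illustrative values).
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth: (H, W) array of metric depth values; returns (N, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

cloud = depth_to_point_cloud(np.random.uniform(0.5, 2.0, size=(480, 640)))
print(cloud.shape)  # roughly (307200, 3)
```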

PhyGrasp: Generalizing Robotic Grasping with Physics-informed Large Multimodal Models

no code yet • 26 Feb 2024

With these two capabilities, PhyGrasp is able to accurately assess the physical properties of object parts and determine optimal grasping poses.

Jacquard V2: Refining Datasets using the Human In the Loop Data Correction Method

no code yet • 8 Feb 2024

The enhanced dataset, named the Jacquard V2 Grasping Dataset, served as the training data for a range of neural networks.

Robust Analysis of Multi-Task Learning on a Complex Vision System

no code yet • 5 Feb 2024

(2) We empirically compare method performance when applied to feature-level gradients versus parameter-level gradients across a large set of MTL optimization algorithms, and conclude that the feature-level gradient surrogate is reasonable when a method-specific theoretical guarantee exists, but does not generalize to all methods.

Physics-Encoded Graph Neural Networks for Deformation Prediction under Contact

no code yet • 5 Feb 2024

We also incorporate cross-attention mechanisms to capture the interplay between the objects.
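
Cross-attention between the feature sets of two interacting objects can be sketched as below. This is a generic illustration using standard multi-head scaled dot-product attention, not the paper's actual architecture; the dimensions are assumptions.

```python
# Generic sketch of cross-attention: features of one object attend to the
# features of the other (not the paper's actual architecture).
import torch
import torch.nn as nn

embed_dim, num_heads = 64, 4
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

object_a = torch.rand(2, 100, embed_dim)  # (batch, nodes of object A, features)
object_b = torch.rand(2, 120, embed_dim)  # (batch, nodes of object B, features)

# Queries come from object A, keys/values from object B, so A's node features
# are updated according to how they interact with B.
a_updated, attn_weights = cross_attn(query=object_a, key=object_b, value=object_b)
print(a_updated.shape)  # (2, 100, 64)
```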

AGILE: Approach-based Grasp Inference Learned from Element Decomposition

no code yet • 2 Feb 2024

The proposed method achieves a 90% grasp success rate on seen objects and 78% on unseen objects in the CoppeliaSim simulation environment.

Synthetic data enables faster annotation and robust segmentation for multi-object grasping in clutter

no code yet • 24 Jan 2024

In this work, we propose a synthetic data generation method that minimizes human intervention and makes downstream image segmentation algorithms more robust by combining a generated synthetic dataset with a smaller real-world dataset (hybrid dataset).
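
A hybrid dataset of this kind can be assembled by simply concatenating the synthetic and real-world samples before training the segmentation model. The sketch below uses placeholder tensors and an assumed image/mask format; it is not the paper's pipeline.

```python
# Minimal sketch (assumed setup): training on a hybrid dataset that combines
# a large synthetic set with a smaller real-world set.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder tensors standing in for (image, mask) pairs.
synthetic = TensorDataset(torch.rand(1000, 3, 64, 64),
                          torch.randint(0, 2, (1000, 64, 64)))
real = TensorDataset(torch.rand(100, 3, 64, 64),
                     torch.randint(0, 2, (100, 64, 64)))

hybrid = ConcatDataset([synthetic, real])
loader = DataLoader(hybrid, batch_size=16, shuffle=True)

images, masks = next(iter(loader))
print(images.shape, masks.shape)  # (16, 3, 64, 64) (16, 64, 64)
```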

Reinforcement Learning-Based Bionic Reflex Control for Anthropomorphic Robotic Grasping exploiting Domain Randomization

no code yet • 8 Dec 2023

In this study, we introduce an innovative bionic reflex control pipeline leveraging reinforcement learning (RL), thereby eliminating the need for human intervention during control design.

FViT-Grasp: Grasping Objects With Using Fast Vision Transformers

no code yet • 23 Nov 2023

This study addresses the challenge of manipulation, a prominent issue in robotics.