Robotic Grasping
80 papers with code • 4 benchmarks • 16 datasets
This task uses deep learning to determine how best to grasp objects with robotic arms across different scenarios. It is a complex problem, as it can involve dynamic environments and objects the network has never seen.
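A common formulation (not tied to any single paper listed here) represents a planar grasp as a rectangle (x, y, θ, width) over a depth image and scores candidates before a learned model refines them. The sketch below is a toy depth-contrast heuristic, with hypothetical names (`grasp_score`, the toy scene), illustrating the grasp parameterization only:

```python
import numpy as np

def grasp_score(depth, x, y, theta, width):
    """Score a planar grasp (x, y, theta, width) on a depth image.

    Heuristic (illustrative only): a good antipodal grasp has free
    space (larger depth) at both fingertip locations and an object
    surface (smaller depth) between them.
    """
    dx = np.cos(theta) * width / 2.0
    dy = np.sin(theta) * width / 2.0
    f1 = depth[int(round(y - dy)), int(round(x - dx))]  # finger 1
    f2 = depth[int(round(y + dy)), int(round(x + dx))]  # finger 2
    center = depth[int(round(y)), int(round(x))]        # object surface
    # Positive when both fingers land in free space beside the object.
    return min(f1 - center, f2 - center)

# Toy scene: flat table at depth 1.0 with a small box at depth 0.8.
depth = np.full((64, 64), 1.0)
depth[28:36, 28:36] = 0.8

# Candidate grasps: one spanning the box, one too narrow, one off-object.
candidates = [(32, 32, 0.0, 20), (32, 32, 0.0, 4), (5, 5, 0.0, 20)]
best = max(candidates, key=lambda g: grasp_score(depth, *g))
print(best)  # the wide grasp centered on the box
```

Learning-based approaches typically replace the handcrafted score with a network trained on labeled or self-supervised grasp attempts, but the output parameterization is often similar.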
Most implemented papers
Composing Pick-and-Place Tasks By Grounding Language
Controlling robots to perform tasks via natural language is one of the most challenging topics in human-robot interaction.
The Role of Tactile Sensing in Learning and Deploying Grasp Refinement Algorithms
Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands.
Solving the Real Robot Challenge using Deep Reinforcement Learning
This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge, in which a three-fingered robot must carry a cube along specified goal trajectories.
You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration
The canonical object representation is learned solely in simulation and then used to parse a category-level, task trajectory from a single demonstration video.
Causal Counterfactuals for Improving the Robustness of Reinforcement Learning
We apply CausalCF to complex robotic tasks and show that it improves the RL agent's robustness using CausalWorld.
3D Semantic Segmentation of Modular Furniture using rjMCMC
In our approach we jointly estimate the number of functional units, their spatial structure, and their corresponding labels by using reversible jump MCMC (rjMCMC), a method well suited for optimization on spaces of varying dimensions (the number of structural elements).
A Fast Method For Computing Principal Curvatures From Range Images
In particular we compare our method to several alternatives to demonstrate the improvement.
Transferring End-to-End Visuomotor Control from Simulation to Real World for a Multi-Stage Task
End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches.
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN.
The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?
In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch.