no code implementations • 11 Dec 2022 • Ankit Kumar, Priya Shukla, Vandana Kushwaha, G. C. Nandi
In this paper, we present an architecture that, unlike prior work, is context-aware.
no code implementations • 6 Nov 2021 • Priya Shukla, Vandana Kushwaha, G. C. Nandi
In the case of robots, we cannot afford to spend that much time making them learn how to grasp objects effectively.
no code implementations • 15 Jul 2021 • Priya Shukla, Nilotpal Pramanik, Deepesh Mehta, G. C. Nandi
It is trained on the Cornell Grasping Dataset (CGD) and attains 98.87% grasp pose accuracy for detecting both regular- and irregular-shaped objects from RGB-Depth (RGB-D) images, while requiring only one-third of the trainable network parameters compared to existing approaches.
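The Cornell Grasping Dataset mentioned above annotates grasps as oriented rectangles in the image plane. As a minimal illustration (not the paper's model), the sketch below converts a 5-D grasp pose (center, rotation, gripper opening, and jaw height) into the four rectangle corners used by that annotation format; the function name is hypothetical.

```python
import math

def grasp_rect_corners(x, y, theta, w, h):
    """Convert a 5-D grasp pose (center x, y; rotation theta in radians;
    gripper opening w; jaw height h) into the four corner points of the
    oriented grasp rectangle used by Cornell-style annotations."""
    dx, dy = w / 2.0, h / 2.0
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    # offsets of the four corners in the rectangle's local frame,
    # rotated by theta and translated to the center (x, y)
    for ox, oy in [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]:
        corners.append((x + ox * c - oy * s, y + ox * s + oy * c))
    return corners
```

A predicted grasp can then be scored against a ground-truth rectangle by comparing orientations and rectangle overlap, which is the standard evaluation protocol for this dataset.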
no code implementations • 23 Jan 2020 • Mridul Mahajan, Tryambak Bhattacharjee, Arya Krishnan, Priya Shukla, G. C. Nandi
However, vision-based robotic grasp detection is hindered by the unavailability of sufficient labelled data.
no code implementations • 15 Jan 2020 • Priya Shukla, Hitesh Kumar, G. C. Nandi
Further, for grasp orientation learning, we develop a deep reinforcement learning (DRL) model, which we name Grasp Deep Q-Network (GDQN), and benchmark our results against a Modified VGG16 (MVGG16).
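The abstract above does not give GDQN's architecture, but the underlying idea of Q-learning over grasp orientations can be illustrated in a minimal, hypothetical form. The sketch below uses a Q-table over discretized orientation bins as a stand-in for the deep network; the bin count, learning rate, and exploration rate are assumptions for illustration only.

```python
import random

# Hypothetical sketch: Q-learning over discretized grasp orientations.
# GDQN uses a deep network as the Q-function approximator; a plain
# table stands in for it here to keep the example self-contained.
N_BINS = 18          # assumed: 10-degree orientation bins over 180 degrees
ALPHA = 0.1          # assumed learning rate
EPSILON = 0.2        # assumed exploration rate
q_values = [0.0] * N_BINS

def choose_orientation(epsilon=EPSILON):
    """Epsilon-greedy selection of an orientation bin."""
    if random.random() < epsilon:
        return random.randrange(N_BINS)
    return max(range(N_BINS), key=lambda i: q_values[i])

def update(bin_idx, reward):
    """Single-step (bandit-style) Q-update: each grasp attempt ends
    the episode, so there is no discounted successor value."""
    q_values[bin_idx] += ALPHA * (reward - q_values[bin_idx])
```

In a full pipeline the reward would come from simulated or real grasp success, and the table lookup would be replaced by a network conditioned on the object image.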