Search Results for author: Vishnu Sashank Dorbala

Found 7 papers, 2 papers with code

Right Place, Right Time! Towards ObjectNav for Non-Stationary Goals

no code implementations • 14 Mar 2024 • Vishnu Sashank Dorbala, Bhrij Patel, Amrit Singh Bedi, Dinesh Manocha

We address this concern by evaluating two object-placement regimes: one where objects follow a routine or a path, and another where they are placed at random.

Object, Visual Grounding
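The two placement regimes described in the snippet above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the grid-cell representation are assumptions.

```python
import random

def routine_position(waypoints, t):
    # Routine regime: the object cycles along a fixed route, so its
    # position at step t is fully determined by t modulo the route length.
    return waypoints[t % len(waypoints)]

def random_position(candidate_spots, rng):
    # Random regime: the object is re-placed uniformly at random among
    # candidate locations, so only the distribution is predictable.
    return rng.choice(candidate_spots)

waypoints = [(0, 0), (1, 0), (1, 1), (0, 1)]  # e.g. a looping household route
spots = [(0, 0), (2, 3), (5, 1)]
rng = random.Random(0)

# Routine placement repeats with the route period; random placement does not.
assert routine_position(waypoints, 0) == routine_position(waypoints, 4)
assert random_position(spots, rng) in spots
```

An agent can exploit the routine case by predicting where the object will be when it arrives, whereas the random case forces renewed exploration.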

Can an Embodied Agent Find Your "Cat-shaped Mug"? LLM-Guided Exploration for Zero-Shot Object Navigation

1 code implementation • 6 Mar 2023 • Vishnu Sashank Dorbala, James F. Mullen Jr., Dinesh Manocha

We present LGX (Language-guided Exploration), a novel algorithm for Language-Driven Zero-Shot Object Goal Navigation (L-ZSON), where an embodied agent navigates to a uniquely described target object in a previously unseen environment.

Motion Planning, Object, +3
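The core idea of language-guided exploration — letting a language model rank candidate exploration directions by their relevance to the target description — can be sketched as below. This is a toy illustration, not LGX itself: the frontier structure is assumed, and the word-overlap scorer merely stands in for a real LLM relevance query.

```python
def pick_frontier(frontiers, target_desc, llm_score):
    # Choose the frontier whose observed context the language model
    # rates as most relevant to the natural-language target description.
    return max(frontiers, key=lambda f: llm_score(target_desc, f["context"]))

def toy_llm_score(target, context):
    # Stand-in for an LLM relevance call: count shared words.
    return len(set(target.lower().split()) & set(context.lower().split()))

frontiers = [
    {"context": "sofa tv remote living room", "cell": (3, 4)},
    {"context": "mug sink kettle kitchen counter", "cell": (8, 1)},
]
best = pick_frontier(frontiers, "cat-shaped mug near the kettle", toy_llm_score)
assert best["cell"] == (8, 1)  # the kitchen frontier wins for a mug query
```

Swapping `toy_llm_score` for an actual LLM call (and grounding detections with a vision-language model) recovers the zero-shot flavor of the approach.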

CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation

no code implementations • 30 Nov 2022 • Vishnu Sashank Dorbala, Gunnar Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav S. Sukhatme

Our results on the coarse-grained instruction following task of REVERIE demonstrate the navigational capability of CLIP, surpassing the supervised baseline in terms of both success rate (SR) and success weighted by path length (SPL).

Instruction Following, Object Recognition, +1
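The two metrics named in the snippet above are standard in embodied navigation: success rate (SR) is the fraction of successful episodes, and SPL (Success weighted by Path Length, Anderson et al., 2018) discounts each success by how efficient the taken path was. A minimal sketch:

```python
def spl(episodes):
    # SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i), where S_i is success,
    # l_i the shortest-path length, and p_i the agent's path length.
    total = 0.0
    for success, shortest, taken in episodes:
        total += (shortest / max(taken, shortest)) if success else 0.0
    return total / len(episodes)

# (success, shortest-path length, path length actually taken)
episodes = [(True, 10.0, 10.0), (True, 10.0, 20.0), (False, 5.0, 7.0)]

assert abs(spl(episodes) - 0.5) < 1e-9                  # (1.0 + 0.5 + 0.0) / 3
assert sum(s for s, _, _ in episodes) / len(episodes) == 2 / 3  # SR
```

SPL is always bounded above by SR, which is why papers typically report both: SR measures whether the agent gets there, SPL measures how directly.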

Can a Robot Trust You? A DRL-Based Approach to Trust-Driven Human-Guided Navigation

no code implementations • 1 Nov 2020 • Vishnu Sashank Dorbala, Arjun Srinivasan, Aniket Bera

We incorporate both of these trust metrics into an optimal cognitive reasoning scheme that decides when, and when not, to trust the given guidance.

Navigate, Robot Navigation
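The trust-gated decision described above can be sketched as a simple rule: combine the two trust metrics and follow the human's guidance only when the combined score clears a threshold. The metric names, the equal-weight average, and the threshold are all illustrative assumptions, not the paper's learned DRL policy.

```python
def follow_guidance(trust_a, trust_b, threshold=0.5):
    # Combine two trust metrics (equal weights, purely illustrative) and
    # follow the human's guidance only if the result clears the threshold.
    combined = 0.5 * trust_a + 0.5 * trust_b
    return combined >= threshold

assert follow_guidance(0.9, 0.8)        # trusted guide: follow
assert not follow_guidance(0.2, 0.3)    # distrusted guide: fall back to own plan
```

In the paper this decision is learned with deep reinforcement learning rather than hand-thresholded; the sketch only conveys the gating structure.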

ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation

1 code implementation • 2 Mar 2020 • Venkatraman Narayanan, Bala Murali Manoghar, Vishnu Sashank Dorbala, Dinesh Manocha, Aniket Bera

Our approach predicts the perceived emotion of a pedestrian from their walking gait, which is then used for emotion-guided navigation that accounts for social and proxemic constraints.

Emotion Classification, Emotion Recognition, +3
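One common way to turn a predicted emotion into a proxemic constraint is to place a Gaussian "keep-away" cost around the pedestrian and inflate it for emotions that warrant extra personal space. The cost shape, weights, and sigma below are illustrative assumptions, not ProxEmo's actual formulation.

```python
import math

def proxemic_cost(dist, emotion_weight, sigma=1.2):
    # Social navigation cost: a Gaussian centered on the pedestrian,
    # scaled by an emotion-dependent weight (values are illustrative).
    return emotion_weight * math.exp(-dist ** 2 / (2 * sigma ** 2))

EMOTION_WEIGHTS = {"happy": 1.0, "neutral": 1.2, "sad": 1.5, "angry": 2.0}

# At the same distance, an angry pedestrian induces a larger keep-away cost,
# so the planner steers a wider berth around them.
assert proxemic_cost(1.0, EMOTION_WEIGHTS["angry"]) > proxemic_cost(1.0, EMOTION_WEIGHTS["happy"])
```

Adding this cost to a planner's objective makes comfortable clearance emotion-dependent while leaving the rest of the navigation stack unchanged.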

A Deep Learning Approach for Robust Corridor Following

no code implementations • 18 Nov 2019 • Vishnu Sashank Dorbala, A. H. Abdul Hafez, C. V. Jawahar

For an autonomous corridor-following task in a continuously changing environment, several forms of environmental noise prevent an automated feature extraction procedure from performing reliably.
