Vision-based navigation with language-based assistance

2 papers with code • 0 benchmarks • 0 datasets

A grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects and thus makes requests by specifying only high-level end-goals, and (b) the agent is capable of sensing when it is lost and querying an advisor, who is more qualified at the task, to obtain language subgoals that help it make progress.
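The interaction protocol can be illustrated with a minimal, hypothetical sketch: the agent acts toward a high-level end-goal, monitors whether it is lost, and spends a limited help budget to request language subgoals from an advisor. Names such as `Agent`, `Advisor`, and `run_episode` below are placeholders for illustration only, not part of the official debadeepta/vnla codebase.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subgoal:
    instruction: str  # short language instruction, e.g. "go through the door on your left"

class Advisor:
    """Oracle-like helper that issues language subgoals toward the end-goal."""
    def assist(self, observation) -> Subgoal:
        # In the real task the advisor knows the environment and a good path;
        # here we return a canned instruction as a stand-in.
        return Subgoal("walk forward and turn left at the hallway")

class Agent:
    """Agent that navigates toward a high-level end-goal and asks for help when lost."""
    def __init__(self, help_budget: int = 3, confusion_threshold: float = 0.3):
        self.help_budget = help_budget
        self.confusion_threshold = confusion_threshold

    def is_lost(self, observation) -> bool:
        # Placeholder for a learned uncertainty estimate over the next action.
        return random.random() < self.confusion_threshold

    def act(self, observation, subgoal: Optional[Subgoal]) -> str:
        # Placeholder policy; a real agent conditions on vision and language.
        return random.choice(["forward", "turn_left", "turn_right", "stop"])

def run_episode(agent: Agent, advisor: Advisor, end_goal: str, max_steps: int = 20):
    subgoal = None
    for step in range(max_steps):
        observation = {"rgb": None, "goal": end_goal}  # stand-in for the visual input
        # Query the advisor only when the agent is lost and still has budget left.
        if agent.is_lost(observation) and agent.help_budget > 0:
            subgoal = advisor.assist(observation)
            agent.help_budget -= 1
        action = agent.act(observation, subgoal)
        print(f"step {step}: action={action}, "
              f"subgoal={subgoal.instruction if subgoal else None}")
        if action == "stop":
            break

if __name__ == "__main__":
    run_episode(Agent(), Advisor(), end_goal="find a mug in the kitchen")
```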

Most implemented papers

Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

debadeepta/vnla CVPR 2019

We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.

BankNote-Net: Open dataset for assistive universal currency recognition

microsoft/banknote-net 7 Apr 2022

This last task, the recognition of banknotes of different denominations, has been addressed using computer vision models for image recognition.