Vision-and-Dialog Navigation

10 Jul 2019 · Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer

Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/
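The baseline described here is a multi-modal sequence-to-sequence model: a recurrent encoder over the dialog history (the target object plus question/answer turns), and a recurrent decoder that predicts discrete navigation actions conditioned on the current visual observation. The sketch below illustrates that shape in PyTorch; all module names, dimensions, and the six-way action space are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a multi-modal sequence-to-sequence navigator, in the
# spirit of the paper's baseline. Names, sizes, and the action space are
# assumptions for illustration only.
import torch
import torch.nn as nn

class DialogEncoder(nn.Module):
    """Encodes the tokenized dialog history (target object + Q/A turns)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):                    # tokens: (B, T)
        embedded = self.embed(tokens)             # (B, T, E)
        outputs, (h, c) = self.lstm(embedded)
        return outputs, (h, c)

class ActionDecoder(nn.Module):
    """Predicts the next navigation action from visual features,
    one step at a time, starting from the dialog encoding."""
    def __init__(self, feat_dim=2048, hidden_dim=512, num_actions=6):
        super().__init__()
        self.lstm_cell = nn.LSTMCell(feat_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, img_feat, state):
        h, c = self.lstm_cell(img_feat, state)    # condition on current view
        logits = self.action_head(h)              # scores over discrete actions
        return logits, (h, c)

# Usage: encode the dialog once, then roll the decoder forward from the
# dialog encoding, feeding in per-step image features.
encoder = DialogEncoder(vocab_size=10_000)
decoder = ActionDecoder()
tokens = torch.randint(0, 10_000, (1, 40))        # fake dialog history
_, (h, c) = encoder(tokens)
state = (h.squeeze(0), c.squeeze(0))              # LSTMCell takes (B, H)
img_feat = torch.randn(1, 2048)                   # fake CNN view features
logits, state = decoder(img_feat, state)
action = logits.argmax(dim=-1)                    # greedy next action
```

The design choice worth noting from the paper: the decoder's conditioning context is the whole dialog history, and the authors report that encoding more of that history (rather than only the most recent exchange) improves navigation performance.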


Datasets

CVDN (Cooperative Vision-and-Dialog Navigation)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Visual Navigation | Cooperative Vision-and-Dialogue Navigation | Seq2Seq Baseline | dist_to_end_reduction | 2.35 | #18 |
| Visual Navigation | Cooperative Vision-and-Dialogue Navigation | Seq2Seq Baseline | spl | 0.16 | #6 |
| Visual Navigation | Cooperative Vision-and-Dialogue Navigation | Pansy | dist_to_end_reduction | 1.76 | #19 |
| Visual Navigation | Cooperative Vision-and-Dialogue Navigation | Pansy | spl | 0.15 | #7 |
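For reading the table: dist_to_end_reduction (often called goal progress) is how many meters closer to the goal the agent ends up compared to where it started, and spl is Success weighted by Path Length (Anderson et al., 2018), which discounts success by how much longer the agent's path was than the shortest path. A minimal sketch of both metrics, assuming all distances and path lengths are in meters; the numbers in the usage example are illustrative, not taken from the benchmark:

```python
def dist_to_end_reduction(start_to_goal_m, end_to_goal_m):
    """Goal progress: meters of distance to the goal removed by the
    agent's trajectory (higher is better)."""
    return start_to_goal_m - end_to_goal_m

def spl(successes, shortest_path_lengths, agent_path_lengths):
    """Success weighted by Path Length (Anderson et al., 2018):
    mean over episodes of S_i * l_i / max(p_i, l_i), where S_i is a
    0/1 success flag, l_i the shortest-path length, p_i the agent's
    path length."""
    total = 0.0
    for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths):
        total += s * l / max(p, l)
    return total / len(successes)

# Example: 2.35 m of goal progress; one success on a near-optimal
# path and one failure give SPL = (1 * 8/10 + 0) / 2 = 0.4.
print(dist_to_end_reduction(10.0, 7.65))      # 2.35
print(spl([1, 0], [8.0, 5.0], [10.0, 6.0]))   # 0.4
```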

Methods


No methods listed for this paper.