Search Results for author: Hongxin Li

Found 3 papers, 2 papers with code

MemoNav: Working Memory Model for Visual Navigation

1 code implementation · 29 Feb 2024 · Hongxin Li, Zeyu Wang, Xu Yang, Yuran Yang, Shuqi Mei, Zhaoxiang Zhang

Subsequently, a graph attention module encodes the retained STM and the LTM to generate working memory (WM) which contains the scene features essential for efficient navigation.
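The snippet above describes attention-weighted fusion of the retained short-term memory (STM) and long-term memory (LTM) features into a working memory (WM). A minimal sketch of that idea in pure Python — scaled dot-product attention over memory vectors — not the paper's exact graph attention architecture (module names and the single-query setup here are illustrative assumptions):

```python
import math

def fuse_working_memory(query, memories):
    """Illustrative sketch: attend over retained STM/LTM feature vectors
    and return their attention-weighted sum as a 'working memory' vector.
    `query` and each entry of `memories` are equal-length lists of floats."""
    d = len(query)
    # Scaled dot-product scores between the query and each memory feature.
    scores = [sum(q * m for q, m in zip(query, mem)) / math.sqrt(d)
              for mem in memories]
    # Numerically stable softmax over the scores.
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Working memory = weighted fusion of the memory features.
    return [sum(w * mem[i] for w, mem in zip(weights, memories))
            for i in range(d)]

# Toy usage: two one-hot memory features; the one aligned with the
# query receives the larger attention weight.
wm = fuse_working_memory([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

In the actual MemoNav model the attention runs over a topological graph of memory nodes rather than a flat list; this sketch only conveys the weighted-fusion step.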

Tasks: Decision Making, Graph Attention, +2

Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving

1 code implementation · 29 Nov 2023 · Yuqi Wang, JiaWei He, Lue Fan, Hongxin Li, Yuntao Chen, Zhaoxiang Zhang

In autonomous driving, predicting future events in advance and evaluating the foreseeable risks empowers autonomous vehicles to better plan their actions, enhancing safety and efficiency on the road.

Tasks: Autonomous Driving

MemoNav: Selecting Informative Memories for Visual Navigation

no code implementations · 20 Aug 2022 · Hongxin Li, Xu Yang, Yuran Yang, Shuqi Mei, Zhaoxiang Zhang

To address this limitation, we present MemoNav, a novel memory mechanism for image-goal navigation that retains the agent's informative short-term and long-term memories to improve navigation performance on multi-goal tasks.

Tasks: Action Generation, Graph Attention, +2
