Search Results for author: Yanqing Shen

Found 5 papers, 2 papers with code

InteractionNet: Joint Planning and Prediction for Autonomous Driving with Transformers

1 code implementation • 7 Sep 2023 • Jiawei Fu, Yanqing Shen, Zhiqiang Jian, Shitao Chen, Jingmin Xin, Nanning Zheng

Planning and prediction are two important modules of autonomous driving and have experienced tremendous advancement recently.

Autonomous Driving • CARLA longest6

Complementing Onboard Sensors with Satellite Map: A New Perspective for HD Map Construction

1 code implementation • 29 Aug 2023 • Wenjie Gao, Jiawei Fu, Yanqing Shen, Haodong Jing, Shitao Chen, Nanning Zheng

To enable better integration of satellite maps with existing methods, we propose a hierarchical fusion module, which includes feature-level fusion and BEV-level fusion (see the sketch below).

Autonomous Driving • Semantic Segmentation
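
The two-stage design named in the abstract can be made concrete with a minimal PyTorch sketch: feature-level fusion of camera and satellite-map features, followed by BEV-level fusion of the two BEV grids. The channel sizes, the 1x1 projection, and the gating scheme here are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Toy two-stage fusion: feature-level fusion of camera and
    satellite-map features, then BEV-level fusion of the two BEV grids.
    Shapes and modules are assumptions, not the paper's design."""

    def __init__(self, c_cam=64, c_sat=64, c_bev=128):
        super().__init__()
        # feature-level fusion: concatenate per-location features, project back
        self.feat_fuse = nn.Conv2d(c_cam + c_sat, c_cam, kernel_size=1)
        # BEV-level fusion: learned gate blending onboard and satellite BEV
        self.gate = nn.Sequential(
            nn.Conv2d(2 * c_bev, c_bev, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, cam_feat, sat_feat, cam_bev, sat_bev):
        fused_feat = self.feat_fuse(torch.cat([cam_feat, sat_feat], dim=1))
        g = self.gate(torch.cat([cam_bev, sat_bev], dim=1))
        fused_bev = g * cam_bev + (1 - g) * sat_bev
        return fused_feat, fused_bev
```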

MLF-DET: Multi-Level Fusion for Cross-Modal 3D Object Detection

no code implementations • 18 Jul 2023 • Zewei Lin, Yanqing Shen, Sanping Zhou, Shitao Chen, Nanning Zheng

In this paper, we propose a novel and effective Multi-Level Fusion network, named MLF-DET, for high-performance cross-modal 3D object DETection, which integrates both feature-level fusion and decision-level fusion to fully utilize the information in the image (see the sketch below).

3D Object Detection • Data Augmentation • +1
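
As a rough illustration of the two fusion levels, here is a hedged PyTorch sketch: feature-level fusion appends image features sampled at voxel locations, and decision-level fusion blends 3D box scores with the scores of overlapping 2D detections. The linear projection, the IoU-weighted blending, and `alpha` are assumptions, not MLF-DET's actual design.

```python
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Illustrative sketch of feature-level plus decision-level fusion;
    all modules and shapes are assumptions."""

    def __init__(self, c_voxel=64, c_img=64):
        super().__init__()
        self.proj = nn.Linear(c_voxel + c_img, c_voxel)

    def fuse_features(self, voxel_feat, img_feat):
        # feature-level fusion: append image features sampled at each
        # voxel's projected location, then project back to c_voxel dims
        return self.proj(torch.cat([voxel_feat, img_feat], dim=-1))

    def fuse_decisions(self, scores_3d, iou_2d3d, scores_2d, alpha=0.5):
        # decision-level fusion: blend each 3D box score with the score of
        # its best-overlapping 2D detection (IoU between projected 3D boxes
        # and 2D boxes); scores_3d: (N,), iou_2d3d: (N, M), scores_2d: (M,)
        match = (iou_2d3d * scores_2d.unsqueeze(0)).amax(dim=1)
        return alpha * scores_3d + (1 - alpha) * match
```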

StructVPR: Distill Structural Knowledge with Weighting Samples for Visual Place Recognition

no code implementations • CVPR 2023 • Yanqing Shen, Sanping Zhou, Jingwen Fu, Ruotong Wang, Shitao Chen, Nanning Zheng

In this paper, we propose StructVPR, a novel training architecture for VPR, to enhance structural knowledge in RGB global features and thus improve feature stability in a constantly changing environment (see the sketch below).

Image Retrieval • Knowledge Distillation • +3
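
The sample-weighted distillation idea admits a short sketch: a frozen structural (segmentation-based) teacher guides the RGB branch's global descriptor, with per-sample weights scaling the loss. The cosine-distance loss and the weighting rule here are assumptions, not StructVPR's exact objective.

```python
import torch
import torch.nn.functional as F

def weighted_distill_loss(rgb_feat, struct_feat, sample_weights):
    """Sample-weighted feature distillation sketch.
    rgb_feat, struct_feat: (B, D) global descriptors from the RGB branch
    and the frozen structural branch; sample_weights: (B,)."""
    rgb = F.normalize(rgb_feat, dim=1)
    struct = F.normalize(struct_feat.detach(), dim=1)  # teacher gets no gradients
    per_sample = 1.0 - (rgb * struct).sum(dim=1)       # cosine distance per pair
    return (sample_weights * per_sample).sum() / sample_weights.sum().clamp_min(1e-8)
```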

TransVPR: Transformer-based place recognition with multi-level attention aggregation

no code implementations • CVPR 2022 • Ruotong Wang, Yanqing Shen, Weiliang Zuo, Sanping Zhou, Nanning Zheng

In addition, the output tokens from the Transformer layers, filtered by the fused attention mask, serve as key-patch descriptors, which are used for spatial matching to re-rank the candidates retrieved by the global image features (see the sketch below).

Autonomous Driving • Visual Place Recognition
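
A minimal sketch of the key-patch idea: keep only tokens whose fused attention score passes a threshold, then count mutual nearest-neighbour matches between query and candidate patches as a re-ranking score. The thresholding rule and the match-count score are simplifications standing in for the full spatial matching; names and signatures are assumptions.

```python
import torch
import torch.nn.functional as F

def select_key_patches(tokens, attn_scores, threshold=0.5):
    """Keep tokens whose fused attention score exceeds a threshold and use
    them as key-patch descriptors. tokens: (N, D); attn_scores: (N,)."""
    keep = attn_scores > threshold
    return tokens[keep]

def rerank_score(desc_q, desc_r):
    """Count mutual nearest-neighbour matches between query and candidate
    key patches; a stand-in for full spatial matching when re-ranking."""
    sim = F.normalize(desc_q, dim=1) @ F.normalize(desc_r, dim=1).T
    nn_q = sim.argmax(dim=1)   # best candidate patch per query patch
    nn_r = sim.argmax(dim=0)   # best query patch per candidate patch
    mutual = nn_r[nn_q] == torch.arange(desc_q.size(0))
    return int(mutual.sum())
```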
