Search Results for author: Qingqing Yan

Found 3 papers, 1 paper with code

Efficient Text-driven Motion Generation via Latent Consistency Training

no code implementations • 5 May 2024 • Mengxian Hu, Minghao Zhu, Xun Zhou, Qingqing Yan, Shu Li, Chengju Liu, Qijun Chen

Motion diffusion models have recently proven successful for text-driven human motion generation.

Quantization

PASTS: Progress-Aware Spatio-Temporal Transformer Speaker For Vision-and-Language Navigation

no code implementations • 19 May 2023 • Liuyi Wang, Chengju Liu, Zongtao He, Shu Li, Qingqing Yan, Huiyi Chen, Qijun Chen

The experimental results demonstrate that PASTS outperforms all existing speaker models and successfully improves the performance of previous VLN models, achieving state-of-the-art performance on the standard Room-to-Room (R2R) dataset.

Data Augmentation • Vision and Language Navigation

MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation

1 code implementation • 2 Mar 2023 • Zongtao He, Liuyi Wang, Shu Li, Qingqing Yan, Chengju Liu, Qijun Chen

For better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, the Multi-Level Attention Network (MLANet).

Navigate • Vision and Language Navigation
