1 code implementation • 31 Jan 2024 • Ling-An Zeng, Wei-Shi Zheng
To leverage multimodal information (i.e., RGB, optical flow, and audio) for AQA, we propose a Progressive Adaptive Multimodal Fusion Network (PAMFN) that separately models modality-specific information and mixed-modality information.
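The two-stage idea above (per-modality branches followed by mixed-modality fusion) can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the feature dimensions, random weights, and linear/ReLU branches are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, weights):
    """Modality-specific branch: a linear projection with ReLU (illustrative)."""
    return np.maximum(features @ weights, 0.0)

# Hypothetical per-modality clip features (8 clips, 16-dim each).
rgb = rng.normal(size=(8, 16))    # RGB appearance features
flow = rng.normal(size=(8, 16))   # optical-flow motion features
audio = rng.normal(size=(8, 16))  # audio features

# Stage 1: modality-specific encoders with independent weights per modality.
w_rgb, w_flow, w_audio = (rng.normal(size=(16, 8)) for _ in range(3))
h_rgb = encode(rgb, w_rgb)
h_flow = encode(flow, w_flow)
h_audio = encode(audio, w_audio)

# Stage 2: mixed-modality fusion, then pool over clips and regress a score.
fused = np.concatenate([h_rgb, h_flow, h_audio], axis=1)  # shape (8, 24)
w_fuse = rng.normal(size=(24, 1))
score = float(fused.mean(axis=0) @ w_fuse)  # scalar quality score
```

Keeping the per-modality encoders separate before fusion is what lets each branch specialize, which is the intuition behind modeling modality-specific and mixed-modality information separately.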
1 code implementation • 29 Sep 2023 • Yuan-Ming Li, Ling-An Zeng, Jing-Ke Meng, Wei-Shi Zheng
Our idea for modeling Continual-AQA is to sequentially learn a task-consistent, score-discriminative feature distribution, in which the latent features correlate strongly with the score labels regardless of the task or action type. From this perspective, we aim to mitigate forgetting in Continual-AQA from two aspects.
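To make the notion of a task-consistent, score-discriminative feature distribution concrete, here is a toy sketch (not the paper's method; the synthetic data, feature dimensions, and linear probe are all assumptions): if two sequential tasks embed scores along a shared feature direction, a single linear probe fit on the first task stays predictive on the second.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(n, dim, direction):
    """Synthetic task: features carry the score along a shared direction plus noise."""
    scores = rng.uniform(0, 10, size=n)
    feats = scores[:, None] * direction + 0.1 * rng.normal(size=(n, dim))
    return feats, scores

dim = 4
shared = rng.normal(size=dim)
shared /= np.linalg.norm(shared)           # task-consistent score direction
f1, s1 = make_task(50, dim, shared)        # task 1 (e.g., one action type)
f2, s2 = make_task(50, dim, shared)        # task 2, same feature-score structure

# A linear probe fit only on task 1...
w, *_ = np.linalg.lstsq(f1, s1, rcond=None)

# ...remains strongly correlated with scores on both tasks, because the
# feature distribution is score-discriminative in a task-consistent way.
corrs = []
for feats, scores in [(f1, s1), (f2, s2)]:
    pred = feats @ w
    corrs.append(np.corrcoef(pred, scores)[0, 1])
```

If each task instead embedded scores along its own direction, the probe fit on task 1 would degrade on task 2, which is the forgetting the approach aims to avoid.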
no code implementations • CVPR 2022 • Angchi Xu, Ling-An Zeng, Wei-Shi Zheng
Long-term action quality assessment is a task of evaluating how well an action is performed, namely, estimating a quality score from a long video.
Ranked #1 on Action Quality Assessment on Rhythmic Gymnastics
2 code implementations • 13 Aug 2020 • Ling-An Zeng, Fa-Ting Hong, Wei-Shi Zheng, Qi-Zhi Yu, Wei Zeng, Yao-Wei Wang, Jian-Huang Lai
However, most existing works focus only on dynamic video information (i.e., motion information) but ignore the specific postures an athlete performs in a video, which are important for action assessment in long videos.
Ranked #2 on Action Quality Assessment on Rhythmic Gymnastics