Search Results for author: Ling-An Zeng

Found 4 papers, 3 papers with code

Multimodal Action Quality Assessment

1 code implementation • 31 Jan 2024 • Ling-An Zeng, Wei-Shi Zheng

To leverage multimodal information (i.e., RGB, optical flow, and audio) for AQA, we propose a Progressive Adaptive Multimodal Fusion Network (PAMFN) that separately models modality-specific information and mixed-modality information.

Action Quality Assessment • Decoder • +1
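The separation described in the abstract — per-modality streams plus a branch that mixes them — can be illustrated with a toy sketch. This is a hypothetical illustration only, not the authors' PAMFN code: the real network uses learned deep encoders and progressive adaptive fusion, whereas here each "encoder" and the fusion step are stand-in scalar functions.

```python
# Toy sketch of the modality-specific / mixed-modality split (hypothetical,
# not the official PAMFN implementation). Each modality (RGB, optical flow,
# audio) is encoded on its own, then a separate branch mixes the streams
# before a simple score head.

def encode(features, weight):
    """Stand-in for a modality-specific encoder: scale each per-frame feature."""
    return [weight * f for f in features]

def fuse(streams):
    """Stand-in for the mixed-modality branch: average streams per timestep."""
    return [sum(vals) / len(vals) for vals in zip(*streams)]

def predict_score(fused):
    """Stand-in for the score regressor: pool fused features to one score."""
    return sum(fused) / len(fused)

# Illustrative per-frame scalar features (made-up numbers).
rgb   = encode([0.9, 0.8, 0.7], weight=1.0)
flow  = encode([0.5, 0.6, 0.4], weight=1.0)
audio = encode([0.2, 0.3, 0.1], weight=1.0)

score = predict_score(fuse([rgb, flow, audio]))
```

The point of the structure is that each modality keeps its own parameters (`encode`) so modality-specific cues are not washed out before the mixing step (`fuse`) sees them.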

Continual Action Assessment via Task-Consistent Score-Discriminative Feature Distribution Modeling

1 code implementation • 29 Sep 2023 • Yuan-Ming Li, Ling-An Zeng, Jing-Ke Meng, Wei-Shi Zheng

Our idea for modeling Continual-AQA is to sequentially learn a task-consistent, score-discriminative feature distribution, in which the latent features correlate strongly with the score labels regardless of the task or action type. From this perspective, we aim to mitigate forgetting in Continual-AQA from two aspects.

Action Assessment • Action Quality Assessment • +1

Likert Scoring With Grade Decoupling for Long-Term Action Assessment

no code implementations • CVPR 2022 • Angchi Xu, Ling-An Zeng, Wei-Shi Zheng

Long-term action quality assessment is a task of evaluating how well an action is performed, namely, estimating a quality score from a long video.

Action Assessment • Action Quality Assessment • +1

Hybrid Dynamic-static Context-aware Attention Network for Action Assessment in Long Videos

2 code implementations • 13 Aug 2020 • Ling-An Zeng, Fa-Ting Hong, Wei-Shi Zheng, Qi-Zhi Yu, Wei Zeng, Yao-Wei Wang, Jian-Huang Lai

However, most existing works focus only on a video's dynamic information (i.e., motion) and ignore the specific postures an athlete performs, which are important for action assessment in long videos.

Action Assessment • Action Quality Assessment
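The dynamic-static idea — attending over motion cues and posture cues as separate streams before scoring — can be sketched as below. This is a hypothetical toy illustration, not the paper's released implementation: the real model attends over learned deep clip features, while here clips are reduced to made-up scalars and the attention query is fixed.

```python
import math

# Hypothetical sketch (not the authors' code): attention pooling over
# dynamic (motion) and static (posture) clip features, then a toy score head.

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(query, features):
    """Weight each clip feature by its (scalar) similarity to the query."""
    weights = softmax([query * f for f in features])
    return sum(w * f for w, f in zip(weights, features))

# Illustrative per-clip scalar features for a long video (made-up numbers).
dynamic_feats = [0.2, 0.8, 0.5]   # motion cues
static_feats  = [0.6, 0.4, 0.7]   # posture cues

# Pool each stream separately, then combine — a stand-in for fusing the
# dynamic and static context in the paper's hybrid attention design.
dynamic_ctx = attention_pool(1.0, dynamic_feats)
static_ctx  = attention_pool(1.0, static_feats)
score = 0.5 * (dynamic_ctx + static_ctx)
```

Pooling the two streams separately is the key design choice the abstract motivates: posture (static) evidence contributes to the score even when motion (dynamic) cues dominate individual clips.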
