Search Results for author: Faliang Chang

Found 3 papers, 0 papers with code

AE-Net: Adjoint Enhancement Network for Efficient Action Recognition in Video Understanding

no code implementations • TMM 2022 • Bin Wang, Chunsheng Liu, Faliang Chang, Wenqian Wang, Nanjun Li

Action recognition in video understanding is a challenging task, largely because of the complexity and difficulty of temporal modeling, which leads to loss of motion information and misalignment of temporal attention in the spatial dimensions.

Action Recognition · Video Understanding

Guidance Module Network for Video Captioning

no code implementations • 20 Dec 2020 • Xiao Zhang, Chunsheng Liu, Faliang Chang

In this paper, we present a novel architecture that introduces a guidance module to encourage the encoder-decoder model to generate words related to the past and future words in a caption.
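The paper lists no code implementation, so the following is only a minimal sketch of the general idea described above: an encoder-decoder captioner with an auxiliary "guidance" head that is also trained to recover the previous and next ground-truth words at each decoding step. The GRU architecture, module names, dimensions, and the 0.5 loss weight are all assumptions for illustration, not the authors' design.

```python
# Hedged sketch, NOT the paper's implementation: encoder-decoder captioning
# with a hypothetical guidance head predicting past/future words.
import torch
import torch.nn as nn

class GuidedCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=512, hid_dim=512, emb_dim=300):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hid_dim, batch_first=True)   # video features -> context state
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.GRUCell(emb_dim, hid_dim)
        self.word_head = nn.Linear(hid_dim, vocab_size)               # predicts the current word
        self.guide_head = nn.Linear(hid_dim, 2 * vocab_size)          # hypothetical: predicts (prev, next) words

    def forward(self, feats, captions):
        # feats: (B, T, feat_dim) video features; captions: (B, L) token ids
        _, h = self.encoder(feats)
        h = h.squeeze(0)                                              # (B, hid_dim) initial decoder state
        word_logits, guide_logits = [], []
        for t in range(captions.size(1) - 1):
            h = self.decoder(self.embed(captions[:, t]), h)
            word_logits.append(self.word_head(h))
            guide_logits.append(self.guide_head(h))
        return torch.stack(word_logits, 1), torch.stack(guide_logits, 1)

def guided_loss(word_logits, guide_logits, captions, pad_id=0):
    # Standard caption cross-entropy plus an auxiliary loss that asks the
    # guidance head to recover the previous and next tokens of each position.
    B, Lm1, V = word_logits.shape
    ce = nn.CrossEntropyLoss(ignore_index=pad_id)
    main = ce(word_logits.reshape(-1, V), captions[:, 1:].reshape(-1))
    prev_logits, next_logits = guide_logits.split(V, dim=-1)
    prev_tgt = captions[:, :-1]                                       # word before the predicted one
    next_tgt = torch.cat([captions[:, 2:], captions[:, -1:]], dim=1)  # word after (last token repeated)
    guide = ce(prev_logits.reshape(-1, V), prev_tgt.reshape(-1)) + \
            ce(next_logits.reshape(-1, V), next_tgt.reshape(-1))
    return main + 0.5 * guide                                         # 0.5 weight is an arbitrary assumption
```

The intent of the auxiliary term in this sketch is simply to push the decoder state to carry information about surrounding words, which is one plausible way to read "generate words related to the past and future words in a caption."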

Sentence · Video Captioning

Attention to Head Locations for Crowd Counting

no code implementations • 27 Jun 2018 • Youmei Zhang, Chunluan Zhou, Faliang Chang, Alex C. Kot

Occlusions, complex backgrounds, scale variations, and non-uniform distributions present great challenges for crowd counting in practical applications.

Crowd Counting · Density Estimation
