no code implementations • 15 Mar 2024 • Liupei Lu, Yufeng Yin, Yuming Gu, Yizhen Wu, Pratusha Prasad, Yajie Zhao, Mohammad Soleymani
Then, we use MSDA to transfer the AU detection knowledge from a real dataset and a synthetic dataset to a target dataset.
no code implementations • 17 Jan 2024 • Yufeng Yin, Ishwarya Ananthabhotla, Vamsi Krishna Ithapu, Stavros Petridis, Yu-Hsiang Wu, Christi Miller
In this work, we build on this idea and introduce the problem of detecting hearing loss from an individual's facial expressions during a conversation.
no code implementations • 5 Sep 2023 • Minh Tran, Yufeng Yin, Mohammad Soleymani
There are individual differences in expressive behaviors driven by cultural norms and personality.
1 code implementation • 23 Aug 2023 • Yufeng Yin, Di Chang, Guoxian Song, Shen Sang, Tiancheng Zhi, Jing Liu, Linjie Luo, Mohammad Soleymani
The proposed FG-Net achieves a strong generalization ability for heatmap-based AU detection thanks to the generalizable and semantic-rich features extracted from the pre-trained generative model.
1 code implementation • 18 Aug 2023 • Di Chang, Yufeng Yin, Zongjian Li, Minh Tran, Mohammad Soleymani
Facial expression analysis is an important tool for human-computer interaction.
no code implementations • 19 Mar 2023 • Yufeng Yin, Minh Tran, Di Chang, Xinrui Wang, Mohammad Soleymani
Facial action unit detection has emerged as an important task within facial expression analysis, aimed at detecting specific pre-defined, objective facial expressions, such as lip tightening and cheek raising.