Search Results for author: Jiabei Zeng

Found 10 papers, 4 papers with code

Source-Free Adaptive Gaze Estimation by Uncertainty Reduction

1 code implementation CVPR 2023 Xin Cai, Jiabei Zeng, Shiguang Shan, Xilin Chen

In light of this, we present an unsupervised source-free domain adaptation approach for gaze estimation, which adapts a source-trained gaze estimator to unlabeled target domains without source data.

Gaze Estimation · Source-Free Domain Adaptation
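
The excerpt above describes adapting a source-trained gaze estimator to unlabeled target data without any source data. As a hedged, minimal illustration of that source-free setting (not the paper's actual objective), the sketch below reduces a simple uncertainty proxy: the disagreement between predictions on two augmented views of each unlabeled target image. The toy GazeNet, the noise augmentation, and the consistency loss are all assumptions made only for illustration.

```python
# Hypothetical sketch: adapt a source-trained gaze regressor to unlabeled
# target images by minimizing a predictive-uncertainty proxy (disagreement
# between two stochastically augmented views). The paper's real uncertainty
# measures and losses may differ; this only illustrates the source-free
# setting (no source data, no target labels).
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Toy stand-in for a source-pretrained gaze estimator (yaw, pitch)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.backbone(x))

def augment(x):
    # Cheap stochastic augmentation; a real pipeline would use crops/jitter.
    return x + 0.05 * torch.randn_like(x)

model = GazeNet()                          # assumed to hold source-trained weights
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
target_images = torch.rand(32, 3, 36, 60)  # unlabeled target-domain eye patches

for step in range(10):
    pred_a = model(augment(target_images))
    pred_b = model(augment(target_images))
    # Uncertainty proxy: predictions on two views of the same image should agree.
    loss = ((pred_a - pred_b) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```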

Cross-Encoder for Unsupervised Gaze Representation Learning

1 code implementation ICCV 2021 Yunjia Sun, Jiabei Zeng, Shiguang Shan, Xilin Chen

To address the issue that the feature of gaze is always intertwined with the appearance of the eye, Cross-Encoder disentangles the features using a latent-code-swapping mechanism on eye-consistent image pairs and gaze-similar ones.

Gaze Estimation · Representation Learning
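
The excerpt above describes disentangling gaze from eye appearance by swapping latent codes within image pairs. The sketch below is a minimal, assumption-laden illustration of such code swapping: the latent is split into a gaze part and an appearance part, the pair's shared part is swapped before decoding, and reconstruction pushes the two parts apart. The architecture, code sizes, and reconstruction loss are placeholders, not the paper's.

```python
# Hypothetical sketch of latent-code swapping in the spirit of a cross-encoder:
# the latent splits into a "gaze" code and an "appearance" code; the shared
# part of a pair is swapped before decoding, so reconstruction only succeeds
# if the split is disentangled. All sizes and modules are illustrative.
import torch
import torch.nn as nn

class CrossEncoderSketch(nn.Module):
    def __init__(self, gaze_dim=8, app_dim=24):
        super().__init__()
        self.gaze_dim = gaze_dim
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, gaze_dim + app_dim),
        )
        self.dec = nn.Sequential(nn.Linear(gaze_dim + app_dim, 32 * 32), nn.Sigmoid())

    def encode(self, x):
        z = self.enc(x)
        return z[:, :self.gaze_dim], z[:, self.gaze_dim:]  # gaze code, appearance code

    def decode(self, gaze, app):
        return self.dec(torch.cat([gaze, app], dim=1)).view(-1, 1, 32, 32)

def swap_loss(model, x1, x2, shared):
    """Reconstruct each image after swapping the pair's shared code."""
    g1, a1 = model.encode(x1)
    g2, a2 = model.encode(x2)
    if shared == "appearance":        # eye-consistent pair: same eye, different gaze
        r1, r2 = model.decode(g1, a2), model.decode(g2, a1)
    else:                             # gaze-similar pair: same gaze, different eye
        r1, r2 = model.decode(g2, a1), model.decode(g1, a2)
    return ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()

model = CrossEncoderSketch()
x1, x2 = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
loss = swap_loss(model, x1, x2, shared="appearance") + \
       swap_loss(model, x1, x2, shared="gaze")
loss.backward()
```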

Emotion Recognition for In-the-wild Videos

no code implementations 13 Feb 2020 Hanyu Liu, Jiabei Zeng, Shiguang Shan, Xilin Chen

This paper is a brief introduction to our submission to the seven basic expression classification track of Affective Behavior Analysis in-the-wild Competition held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020.

Emotion Recognition · General Classification · +1

$M^3$T: Multi-Modal Continuous Valence-Arousal Estimation in the Wild

1 code implementation 7 Feb 2020 Yuan-Hang Zhang, Rulin Huang, Jiabei Zeng, Shiguang Shan, Xilin Chen

This report describes a multi-modal multi-task ($M^3$T) approach underlying our submission to the valence-arousal estimation track of the Affective Behavior Analysis in-the-wild (ABAW) Challenge, held in conjunction with the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020.

Arousal Estimation · Gesture Recognition

Facial Expression Recognition with Inconsistently Annotated Datasets

no code implementations ECCV 2018 Jiabei Zeng, Shiguang Shan, Xilin Chen

To address the inconsistency, we propose an Inconsistent Pseudo Annotations to Latent Truth (IPA2LT) framework to train a FER model from multiple inconsistently labeled datasets and large-scale unlabeled data.

Facial Expression Recognition · Facial Expression Recognition (FER)
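
The excerpt above describes learning a latent truth from inconsistently annotated datasets. Below is a hedged sketch of one common way to model that setting: a shared classifier predicts a latent-truth distribution, and a learnable per-dataset transition matrix maps it to each dataset's observed labels before the loss. The network, matrix parameterization, and training loop are illustrative assumptions, not necessarily the IPA2LT recipe.

```python
# Hypothetical sketch of training one expression classifier from datasets with
# inconsistent labels: a shared model predicts a "latent truth" distribution,
# and a per-dataset transition matrix maps it to that dataset's annotation
# space before the loss. The exact IPA2LT training scheme is in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 7          # basic expressions
NUM_DATASETS = 2

backbone = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 64), nn.ReLU(),
                         nn.Linear(64, NUM_CLASSES))
# One learnable annotation-bias matrix per dataset (rows: latent truth,
# columns: observed label), initialised close to the identity.
bias_logits = nn.Parameter(torch.eye(NUM_CLASSES).repeat(NUM_DATASETS, 1, 1) * 5.0)
opt = torch.optim.Adam(list(backbone.parameters()) + [bias_logits], lr=1e-3)

def observed_log_probs(images, dataset_id):
    latent = F.softmax(backbone(images), dim=1)              # p(latent truth)
    transition = F.softmax(bias_logits[dataset_id], dim=1)   # p(label | truth)
    return torch.log(latent @ transition + 1e-8)             # p(observed label)

# Fake batches standing in for two inconsistently labeled FER datasets.
for dataset_id in range(NUM_DATASETS):
    images = torch.rand(8, 1, 48, 48)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    loss = F.nll_loss(observed_log_probs(images, dataset_id), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```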

Unsupervised Synchrony Discovery in Human Interaction

no code implementations ICCV 2015 Wen-Sheng Chu, Jiabei Zeng, Fernando de la Torre, Jeffrey F. Cohn, Daniel S. Messinger

We evaluate the effectiveness of our approach on multiple databases, including human actions using the CMU Mocap dataset and spontaneous facial behaviors using a group-formation task dataset and a parent-infant interaction dataset.

Computational Efficiency
