A novel facial emotion recognition model using segmentation VGG-19 architecture

Facial Emotion Recognition (FER) has gained popularity in recent years due to its many applications, including biometrics, detection of mental illness, understanding of human behavior, and psychological profiling. However, developing an accurate and robust FER pipeline remains challenging because multiple factors make it difficult to generalize across different emotions. These factors include pose variation, heterogeneity of facial structure, illumination, occlusion, low resolution, and aging. Many approaches have been developed to overcome these problems, such as the Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) histograms. However, these methods require manual feature selection. Convolutional Neural Networks (CNNs) overcome this manual feature selection problem and have shown great potential in FER tasks due to their learned feature extraction strategy compared with handcrafted FER models. In this paper, we propose a novel CNN architecture that interfaces U-Net segmentation layers between Visual Geometry Group (VGG) layers, allowing the network to emphasize the most critical features in the feature map while controlling the flow of redundant information through the VGG layers. Our model achieves state-of-the-art (SOTA) single-network accuracy compared with other well-known FER models on the FER-2013 dataset.
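The abstract describes interleaving U-Net-style segmentation blocks between VGG layers. The sketch below illustrates one plausible reading of that layout as a layer sequence with feature-map shape propagation for 48x48 FER-2013 inputs; the exact insertion points, channel widths, and block internals are assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical layout of a "Segmentation VGG-19": a U-Net-style block is
# inserted after each VGG-19 conv stage, before pooling. This is a sketch
# under assumptions, not the paper's exact architecture.

# Standard VGG-19 feature stages: (number of 3x3 convs, output channels).
VGG19_STAGES = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]

def build_layout(insert_segmentation=True):
    """Return the layer sequence as (kind, channels) tuples."""
    layers = []
    for n_convs, ch in VGG19_STAGES:
        layers += [("conv3x3", ch)] * n_convs
        if insert_segmentation:
            # Assumed placement: a segmentation block re-weights the
            # feature map to emphasize salient facial regions and damp
            # redundant activations before it reaches the next stage.
            layers.append(("seg_block", ch))
        layers.append(("maxpool2x2", ch))
    return layers

def output_shape(layers, size=48):
    """Propagate a size x size input (FER-2013 faces are 48x48 pixels).
    3x3 convs with padding 1 preserve spatial size; each 2x2 max-pool
    halves it; segmentation blocks leave the shape unchanged."""
    ch = 1  # FER-2013 images are grayscale
    for kind, c in layers:
        ch = c
        if kind == "maxpool2x2":
            size //= 2
    return ch, size, size

layout = build_layout()
print(output_shape(layout, 48))  # (512, 1, 1) after five pooling stages
```

Keeping the segmentation blocks shape-preserving means they can be toggled on or off (as in `build_layout(False)`) without changing the downstream classifier head, which is convenient for ablation against a plain VGG-19 baseline.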


Results from the Paper


Ranked #3 on Facial Expression Recognition (FER) on FER2013 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Facial Expression Recognition (FER) | FER2013 | Segmentation VGG-19 | Accuracy | 75.97 | #3 | Yes |

Methods