SAMM Long Videos: A Spontaneous Facial Micro- and Macro-Expressions Dataset

4 Nov 2019  ·  Chuin Hong Yap, Connah Kendrick, Moi Hoon Yap ·

With the growing popularity of facial micro-expression research in recent years, the demand for long videos containing both micro- and macro-expressions remains high. Extending SAMM, a micro-expression dataset released in 2016, this paper presents the SAMM Long Videos dataset for spontaneous micro- and macro-expression recognition and spotting. SAMM Long Videos consists of 147 long videos with 343 macro-expressions and 159 micro-expressions. The dataset is FACS-coded with detailed Action Units (AUs). We compare our dataset with the Chinese Academy of Sciences Macro-Expressions and Micro-Expressions (CAS(ME)2) dataset, which is the only other available fully annotated dataset with both micro- and macro-expressions. Furthermore, we preprocess the long videos using OpenFace, which includes face alignment and detection of facial AUs. We conduct facial expression spotting on this dataset and compare it with the baseline of MEGC III. Our spotting method outperforms the baseline with an F1-score of 0.3299.
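The F1-score reported above follows the interval-based spotting evaluation used in micro-expression spotting challenges. A minimal sketch of how such a score can be computed, assuming the common convention that a predicted interval counts as a true positive when its temporal IoU with an unmatched ground-truth interval is at least 0.5 (the function names and interval format here are illustrative, not taken from the paper):

```python
def interval_iou(a, b):
    """Temporal IoU of two (onset, offset) frame intervals, inclusive."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]) + 1)
    union = (a[1] - a[0] + 1) + (b[1] - b[0] + 1) - inter
    return inter / union

def spotting_f1(pred, gt, iou_thresh=0.5):
    """F1-score for expression spotting: greedy one-to-one matching of
    predicted intervals to ground-truth intervals above an IoU threshold."""
    matched = set()
    tp = 0
    for p in pred:
        for i, g in enumerate(gt):
            if i not in matched and interval_iou(p, g) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(pred) - tp   # spotted intervals with no ground-truth match
    fn = len(gt) - tp     # ground-truth expressions that were missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Example: one prediction overlaps a ground-truth expression well enough
# (IoU ~0.83), the other is a false alarm, and one expression is missed.
print(spotting_f1([(10, 30), (100, 120)], [(12, 32), (200, 220)]))  # 0.5
```

Greedy matching is a simplification; challenge protocols may match intervals differently, but the TP/FP/FN accounting and F1 formula are standard.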


Datasets


Introduced in the Paper:

SAMM Long Videos
