Domain adversarial learning for emotion recognition

24 Oct 2019  ·  Zheng Lian, Jian-Hua Tao, Bin Liu, Jian Huang

In practical applications of emotion recognition, the speakers encountered at test time are often absent from the training corpus. This mismatch between training and testing speakers degrades the performance of the trained model. To address this problem, the model should focus on emotion-related information while ignoring differences in speaker identity. In this paper, we investigate the use of a domain adversarial neural network (DANN) to extract a representation shared across speakers. The primary task is to predict emotion labels; the secondary task is to learn a representation in which speaker identities cannot be distinguished. Through a gradient reversal layer, the gradients from the secondary task push the representations of different speakers closer together. To verify the effectiveness of the proposed method, we conduct experiments on the IEMOCAP database. Experimental results demonstrate that the proposed framework yields an absolute improvement of 3.48% over state-of-the-art strategies.
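The core mechanism described above, a shared encoder whose adversarial branch passes through a gradient reversal layer, can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, class counts, and module names here are hypothetical, and the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity in the forward pass,
    multiplies incoming gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the encoder;
        # lambda itself receives no gradient.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class SpeakerAdversarialModel(nn.Module):
    """Shared encoder with an emotion head (primary task) and a
    speaker head (secondary, adversarial task). All dimensions are
    illustrative placeholders, not values from the paper."""

    def __init__(self, feat_dim=40, hidden=64, n_emotions=4, n_speakers=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.emotion_head = nn.Linear(hidden, n_emotions)
        self.speaker_head = nn.Linear(hidden, n_speakers)

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        emotion_logits = self.emotion_head(h)
        # The speaker branch sees reversed gradients, so minimizing its
        # loss trains the encoder to make speakers indistinguishable.
        speaker_logits = self.speaker_head(grad_reverse(h, lambd))
        return emotion_logits, speaker_logits
```

In training, the sum of the emotion loss and the speaker loss is minimized jointly; because of the reversal, the encoder is simultaneously pulled toward emotion-discriminative and speaker-invariant features.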
