Improving Multi-label Emotion Classification by Integrating both General and Domain-specific Knowledge

WS 2019  ·  Wenhao Ying, Rong Xiang, Qin Lu

Deep learning based general language models have achieved state-of-the-art results on many popular tasks such as sentiment analysis and question answering. However, text in domains like social media has salient characteristics of its own, so domain knowledge should be helpful in domain-relevant tasks. In this work, we devise a simple method to obtain domain knowledge and propose a method to integrate this domain knowledge with the general knowledge in deep language models to improve emotion classification. Experiments on Twitter data show that although a deep language model fine-tuned on target-domain data already attains results comparable to previous state-of-the-art models, the fine-tuned model still benefits from our extracted domain knowledge and achieves further improvement. This highlights the importance of exploiting domain knowledge in domain-specific applications.
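
The page does not describe the paper's actual integration mechanism, so the sketch below is only a minimal illustration under one plausible assumption: a late-fusion design in which BERT's pooled sentence representation is concatenated with a domain-knowledge feature vector (e.g. emotion-lexicon features) before a sigmoid multi-label head over the SemEval emotion labels. The class name `BertWithDomainKnowledge` and the `dk_features` input are hypothetical.

```python
# Minimal sketch only: the paper's exact integration method is not given on this page,
# so this assumes a simple late-fusion design in which BERT's pooled representation is
# concatenated with a domain-knowledge (DK) feature vector before a multi-label head.
import torch
import torch.nn as nn
from transformers import BertModel

class BertWithDomainKnowledge(nn.Module):
    def __init__(self, num_labels=11, dk_dim=50, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Classification head over [pooled BERT output ; domain-knowledge features]
        self.classifier = nn.Linear(hidden + dk_dim, num_labels)

    def forward(self, input_ids, attention_mask, dk_features):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.pooler_output                        # (batch, hidden)
        fused = torch.cat([pooled, dk_features], dim=-1)  # late fusion with the DK vector
        return self.classifier(fused)                     # one logit per emotion label

# Multi-label training pairs these logits with a sigmoid + binary cross-entropy loss,
# e.g. criterion = nn.BCEWithLogitsLoss().
```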

Datasets


SemEval 2018 Task 1E-c
Results

Task: Emotion Classification · Dataset: SemEval 2018 Task 1E-c · Model: BERT+DK

Metric     Value   Global Rank
Macro-F1   0.549   #4
Micro-F1   0.713   #1
Accuracy   0.591   #2
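
For reference, a minimal sketch of how these multi-label metrics can be computed with scikit-learn. Treating the task's Accuracy as Jaccard similarity between predicted and gold label sets follows the usual SemEval-2018 Task 1 E-c convention and is an assumption here, not something stated on this page.

```python
# Sketch: compute Macro-F1, Micro-F1, and Jaccard-based accuracy for multi-label output.
import numpy as np
from sklearn.metrics import f1_score, jaccard_score

# Toy binary indicator matrices: rows = tweets, columns = emotion labels
# (3 labels shown for brevity; the task itself has 11).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 1, 1]])

print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("Accuracy (Jaccard):", jaccard_score(y_true, y_pred, average="samples"))
```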

Methods


BERT