Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm

NLP tasks are often limited by the scarcity of manually annotated data. In social-media sentiment analysis and related tasks, researchers have therefore used binarized emoticons and specific hashtags as forms of distant supervision. We show that extending distant supervision to a more diverse set of noisy labels lets models learn richer representations. Through emoji prediction on a dataset of 1,246 million tweets, each containing one of 64 common emojis, a single pretrained model obtains state-of-the-art performance on 8 benchmark datasets spanning sentiment, emotion and sarcasm detection. Our analyses confirm that the diversity of our emotional labels yields a performance improvement over previous distant-supervision approaches.
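The distant-supervision setup described above can be sketched as follows: each tweet containing one of a fixed set of common emojis becomes a training example whose noisy label is that emoji. This is a simplified illustration, not the paper's implementation; the 5-emoji `EMOJI_SET` and the helper name are assumptions (the paper uses 64 emojis), and this sketch simply discards tweets with more than one distinct target emoji.

```python
# Illustrative sketch of emoji-based distant supervision: the emoji is the
# noisy label, and it is stripped from the text so the model must predict
# it from the words alone. EMOJI_SET is a small placeholder; the paper
# uses 64 common emojis.

EMOJI_SET = {"😂", "😍", "😭", "🔥", "❤"}

def to_distant_example(tweet):
    """Return (text, emoji_label) if the tweet qualifies, else None."""
    found = [ch for ch in tweet if ch in EMOJI_SET]
    if len(set(found)) != 1:
        # Keep only tweets with exactly one distinct target emoji
        # (a simplification of the paper's preprocessing).
        return None
    label = found[0]
    # Remove the emoji from the input text to avoid label leakage.
    text = "".join(ch for ch in tweet if ch not in EMOJI_SET).strip()
    return text, label

print(to_distant_example("just missed my bus 😭"))
# → ('just missed my bus', '😭')
```

Pretraining then amounts to classifying each such text into one of the emoji labels; the learned representations are afterwards transferred to the sentiment, emotion and sarcasm benchmarks.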

EMNLP 2017

| Task               | Dataset                | Model             | Metric Name  | Metric Value | Global Rank |
|--------------------|------------------------|-------------------|--------------|--------------|-------------|
| Sentiment Analysis | 1B Words               | Random            | 1 in 10 R@1  | 17           | # 1         |
| Transfer Learning  | Amazon Review Polarity | Random            | Accuracy     | 12.8         | # 1         |
| Sentiment Analysis | MR                     | Millions of Emoji | Training Time | 1500        | # 1         |

Methods

No methods listed for this paper.