Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media

WS 2020 · Xiangjue Dong, Changmao Li, Jinho D. Choi

We present a transformer-based sarcasm detection model that accounts for the context from the entire conversation thread for more robust predictions. Our model uses deep transformer layers to perform multi-head attentions among the target utterance and the relevant context in the thread...


Code

No code implementations yet.
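Since no official implementation is listed, below is a minimal sketch (not the authors' code) of the idea described in the abstract: the conversation-thread context and the target utterance are packed into a single transformer input so that multi-head self-attention operates jointly over both. The encoder name, the sentence-pair encoding scheme, and the `predict_sarcasm` helper are illustrative assumptions.

```python
# Minimal sketch of context-aware sarcasm detection with a transformer encoder.
# Assumptions: a HuggingFace-style encoder and a sentence-pair input; these are
# illustrative choices, not the paper's published configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # hypothetical choice of encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_sarcasm(thread_context: list[str], target_utterance: str) -> int:
    """Return 1 if the target utterance is predicted sarcastic, else 0."""
    # Join the preceding turns of the thread into one context string.
    context = " ".join(thread_context)
    # Encode context and target as a sentence pair; the encoder's multi-head
    # attention then attends across both segments in the same sequence.
    inputs = tokenizer(context, target_utterance,
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())

# Example usage (untrained classification head, so the label is not meaningful):
print(predict_sarcasm(["Great, another Monday morning meeting."],
                      "Oh, I just can't wait."))
```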


Methods used in the Paper