Sarcasm Detection
63 papers with code • 9 benchmarks • 14 datasets
The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires deep understanding of multiple sources of information, including the utterance, the conversational context, and, frequently, real-world facts.
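In its simplest form, the task is binary text classification. A minimal sketch, assuming scikit-learn is available, trains a TF-IDF bag-of-words model with logistic regression on a tiny illustrative corpus (the example sentences and labels below are invented for demonstration; real systems are trained on the benchmark datasets listed on this page and typically use context beyond the utterance itself):

```python
# Minimal sarcasm-detection sketch: TF-IDF features + logistic regression.
# The toy corpus is illustrative only, not from any benchmark dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Oh great, another Monday. Just what I needed.",
    "Wow, I love waiting in line for three hours.",
    "Sure, because that worked so well last time.",
    "The weather is sunny and warm today.",
    "I enjoyed the concert last night.",
    "The meeting starts at 10 am.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = sarcastic, 0 = non-sarcastic

# Unigram + bigram features feed a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Oh great, another Monday. Just what I needed."])[0])
```

Surface lexical cues like these capture only a fraction of sarcasm; as the description above notes, state-of-the-art models additionally encode conversational context, and the multimodal papers below incorporate images as well.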
Libraries
Use these libraries to find Sarcasm Detection models and implementations.
Latest papers with no code
Generalizable Sarcasm Detection Is Just Around The Corner, Of Course!
We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets with varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), and style (aggressive vs. humorous mocking).
On Prompt Sensitivity of ChatGPT in Affective Computing
Recent studies have demonstrated the emerging capabilities of foundation models like ChatGPT in several fields, including affective computing.
Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding
To address them, we propose Mixture-of-Prompt-Experts with Block-Aware Prompt Fusion (MoPE-BAF), a novel multi-modal soft prompt framework based on the unified vision-language model (VLM).
Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment
Multi-modal semantic understanding requires integrating information from different modalities to extract users' real intention behind words.
MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery
However, understanding the intention behind social media posts remains challenging due to the implicitness of intentions in social media posts, the need for cross-modality understanding of both text and images, and the presence of noisy information such as hashtags, misspelled words, and complicated abbreviations.
InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks
Second, in a dataset extension exercise, influence functions are used to automatically identify data points that were initially `silver'-annotated by an existing method and need to be cross-checked (and corrected) by annotators to improve model performance.
Systematic Literature Review: Computational Approaches for Humour Style Classification
Furthermore, the SLR identifies a range of features and computational models that can seamlessly transition from related tasks like binary humour and sarcasm detection to invigorate humour style classification.
Debiasing Multimodal Sarcasm Detection with Contrastive Learning
Moreover, we propose a novel debiasing multimodal sarcasm detection framework with contrastive learning, which aims to mitigate the harmful effect of biased textual factors for robust OOD generalization.
On Sarcasm Detection with OpenAI GPT-based Models
In the zero-shot case, one of the GPT-4 models yields an accuracy of 0.70 and an $F_1$-score of 0.75.
Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning
We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa.