Sarcasm Detection

63 papers with code • 9 benchmarks • 14 datasets

The goal of Sarcasm Detection is to determine whether a sentence is sarcastic or non-sarcastic. Sarcasm is a linguistic phenomenon with specific perlocutionary effects on the hearer, such as breaking their pattern of expectation. Consequently, correctly understanding sarcasm often requires drawing on multiple sources of information, including the utterance itself, the conversational context, and, frequently, real-world facts.

Source: Attentional Multi-Reading Sarcasm Detection
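As a minimal illustration of the task framing above (binary classification over utterances), here is a toy naive Bayes sketch. The data, tokenizer, and model are illustrative stand-ins, not any benchmarked system; real detectors use pretrained language models and the conversational context the definition mentions.

```python
# Toy sketch: sarcasm detection as binary text classification with a
# multinomial naive Bayes model. Data and labels are invented examples.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(texts, labels):
    """Collect per-class word counts and class priors."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for text, y in zip(texts, labels):
        counts[y].update(tokenize(text))
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(model, text):
    counts, priors, vocab = model
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(priors[y] / sum(priors.values()))
        for w in tokenize(text):
            # Laplace smoothing over the shared vocabulary
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

texts = [
    "oh great another monday just what i needed",  # sarcastic
    "wow i love being stuck in traffic",           # sarcastic
    "the weather is lovely today",                 # non-sarcastic
    "thanks for the clear explanation",            # non-sarcastic
]
labels = [1, 1, 0, 0]

model = train(texts, labels)
print(predict(model, "oh great another delay just what i needed"))  # → 1
```

Note that a bag-of-words model like this sees only the utterance; the context and world knowledge the task definition calls for are exactly what it cannot capture.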

Latest papers with no code

Generalizable Sarcasm Detection Is Just Around The Corner, Of Course!

no code yet • 9 Apr 2024

We tested the robustness of sarcasm detection models by examining their behavior when fine-tuned on four sarcasm datasets with varying characteristics of sarcasm: label source (authors vs. third-party), domain (social media/online vs. offline conversations/dialogues), and style (aggressive vs. humorous mocking).

On Prompt Sensitivity of ChatGPT in Affective Computing

no code yet • 20 Mar 2024

Recent studies have demonstrated the emerging capabilities of foundation models like ChatGPT in several fields, including affective computing.

Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding

no code yet • 17 Mar 2024

To address them, we propose Mixture-of-Prompt-Experts with Block-Aware Prompt Fusion (MoPE-BAF), a novel multi-modal soft prompt framework based on the unified vision-language model (VLM).

Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment

no code yet • 11 Mar 2024

Multi-modal semantic understanding requires integrating information from different modalities to extract users' real intention behind words.

MIKO: Multimodal Intention Knowledge Distillation from Large Language Models for Social-Media Commonsense Discovery

no code yet • 28 Feb 2024

However, understanding the intention behind social media posts remains challenging due to the implicitness of intentions in social media posts, the need for cross-modality understanding of both text and images, and the presence of noisy information such as hashtags, misspelled words, and complicated abbreviations.

InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks

no code yet • 22 Feb 2024

Second, in a dataset extension exercise, influence functions are used to automatically identify data points that were initially 'silver'-annotated by some existing method and need to be cross-checked (and corrected) by annotators to improve model performance.

Systematic Literature Review: Computational Approaches for Humour Style Classification

no code yet • 30 Jan 2024

Furthermore, the SLR identifies a range of features and computational models that can seamlessly transition from related tasks like binary humour and sarcasm detection to invigorate humour style classification.

Debiasing Multimodal Sarcasm Detection with Contrastive Learning

no code yet • 16 Dec 2023

Moreover, we propose a novel debiasing multimodal sarcasm detection framework with contrastive learning, which aims to mitigate the harmful effect of biased textual factors for robust OOD generalization.
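The paper's specific debiasing framework is not reproduced here, but the kind of contrastive objective such methods build on can be sketched in a few lines. The InfoNCE-style loss, toy embeddings, and temperature below are generic illustrative assumptions, not the authors' formulation.

```python
# Generic InfoNCE-style contrastive loss: pull an anchor toward a positive
# sample and push it away from negatives. All vectors here are invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / sum_k exp(sim(a,k)/t) ) over positive + negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilize the log-sum-exp
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

anchor = [1.0, 0.0]
positive = [0.9, 0.1]                   # semantically similar sample
negatives = [[0.0, 1.0], [-1.0, 0.0]]   # dissimilar samples
loss = info_nce(anchor, positive, negatives)
```

Minimizing such a loss groups embeddings of samples that share the target property (here, sarcasm) while separating them from spurious, e.g. biased textual, factors.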

On Sarcasm Detection with OpenAI GPT-based Models

no code yet • 7 Dec 2023

In the zero-shot case, one of the GPT-4 models yields an accuracy of 0.70 and an $F_1$-score of 0.75.
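For reference, the two reported metrics can be computed from binary predictions as follows; the prediction lists below are invented examples, not actual GPT-4 outputs.

```python
# Accuracy and F1 for binary sarcasm labels (1 = sarcastic, 0 = non-sarcastic).
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]  # made-up gold labels
y_pred = [1, 1, 0, 0, 1, 1, 0, 1, 0, 1]  # made-up model predictions
print(accuracy(y_true, y_pred))          # → 0.7
print(round(f1(y_true, y_pred), 2))      # → 0.73
```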

Retrofitting Light-weight Language Models for Emotions using Supervised Contrastive Learning

no code yet • 29 Oct 2023

We present a novel retrofitting method to induce emotion aspects into pre-trained language models (PLMs) such as BERT and RoBERTa.