Misinformation

271 papers with code • 1 benchmark • 38 datasets


Latest papers with no code

Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets

no code yet • 26 Mar 2024

In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
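The kind of dataset bias described here can be illustrated with a toy sketch (all metadata below is fabricated for illustration, not drawn from the paper's datasets): if every real image happens to be JPEG-compressed while every generated image is a lossless PNG, a "detector" that looks only at the file format scores perfectly without learning anything about generation artifacts.

```python
# Toy illustration of a format bias: a "detector" that exploits a
# JPEG-vs-PNG imbalance between real and generated classes.

# Hypothetical dataset metadata: (file_format, label), label 1 = generated.
samples = [
    ("jpeg", 0), ("jpeg", 0), ("jpeg", 0), ("jpeg", 0),  # real photos
    ("png", 1), ("png", 1), ("png", 1), ("png", 1),      # generated images
]

def format_only_detector(file_format: str) -> int:
    """Predict 'generated' purely from the container format."""
    return 1 if file_format == "png" else 0

correct = sum(format_only_detector(f) == y for f, y in samples)
accuracy = correct / len(samples)
print(f"format-only accuracy: {accuracy:.0%}")  # trivially high on a biased dataset
```

On an unbiased dataset, where both classes mix formats and sizes, such a shortcut would collapse to chance, which is one way to audit a benchmark for this bias.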

TrustAI at SemEval-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques

no code yet • 25 Mar 2024

In this paper, we present our methods for SemEval-2024 Task 8, aiming to detect machine-generated text across various domains in both monolingual and multilingual contexts.

NUMTEMP: A real-world benchmark to verify claims with statistical and temporal expressions

no code yet • 25 Mar 2024

NUMTEMP addresses the challenge of verifying real-world numerical claims, which are complex and often lack precise information, a setting not covered by existing work that focuses mainly on synthetic claims.
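A natural first step in verifying such claims is isolating the statistical and temporal expressions themselves. A minimal regex-based extractor (a sketch, not NUMTEMP's actual pipeline) might look like:

```python
import re

# Minimal sketch (not the paper's method): pull numeric and year-like
# temporal expressions out of a claim before verification.
NUM_RE = re.compile(r"\d+(?:\.\d+)?%?")
YEAR_RE = re.compile(r"\b(?:19|20)\d{2}\b")

def extract_expressions(claim: str) -> dict:
    years = YEAR_RE.findall(claim)
    # Exclude year matches from the generic number matches to avoid double counting.
    numbers = [n for n in NUM_RE.findall(claim) if n not in years]
    return {"numbers": numbers, "years": years}

claim = "Unemployment fell from 8.1% in 2012 to 3.5% in 2019."
print(extract_expressions(claim))
# → {'numbers': ['8.1%', '3.5%'], 'years': ['2012', '2019']}
```

Real-world claims also use spelled-out quantities ("three million") and relative dates ("last quarter"), which is part of what makes the benchmark harder than this sketch suggests.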

Evidence-Driven Retrieval Augmented Response Generation for Online Misinformation

no code yet • 22 Mar 2024

The proliferation of online misinformation has posed significant threats to public interest.

From Perils to Possibilities: Understanding how Human (and AI) Biases affect Online Fora

no code yet • 21 Mar 2024

Alongside the perils, we explore the emergence of online support groups through users' self-disclosure and social support mechanisms.

Threats, Attacks, and Defenses in Machine Unlearning: A Survey

no code yet • 20 Mar 2024

Machine Unlearning (MU) has gained considerable attention recently for its potential to achieve Safe AI by removing the influence of specific data from trained machine learning models.

Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts

no code yet • 20 Mar 2024

We examine whether our over-time intervention enhances users' following of news media organizations, their sharing and liking of news content, and their tweeting about politics and liking of political content.

Correcting misinformation on social media with a large language model

no code yet • 17 Mar 2024

The results demonstrate MUSE's ability to correct misinformation promptly after it appears on social media; overall, MUSE outperforms GPT-4 by 37% and even high-quality corrections from laypeople by 29%.

FakeWatch: A Framework for Detecting Fake News to Ensure Credible Elections

no code yet • 14 Mar 2024

In today's technologically driven world, the rapid spread of fake news, particularly during critical events like elections, poses a growing threat to the integrity of information.

Knowledge Conflicts for LLMs: A Survey

no code yet • 13 Mar 2024

This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge.