Misinformation
271 papers with code • 1 benchmark • 38 datasets
Latest papers with no code
Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets
In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
TrustAI at SemEval-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques
In this paper, we present our methods for SemEval-2024 Task 8, aiming to detect machine-generated text across various domains in both monolingual and multilingual contexts.
NUMTEMP: A real-world benchmark to verify claims with statistical and temporal expressions
NUMTEMP addresses the challenge of verifying real-world numerical claims, which are complex, often lack precise information, and are overlooked by existing works that focus mainly on synthetic claims.
Evidence-Driven Retrieval Augmented Response Generation for Online Misinformation
The proliferation of online misinformation has posed significant threats to public interest.
From Perils to Possibilities: Understanding how Human (and AI) Biases affect Online Fora
We also explore the emergence of online support groups through users' self-disclosure and social support mechanisms.
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Machine Unlearning (MU) has gained considerable attention recently for its potential to achieve Safe AI by removing the influence of specific data from trained machine learning models.
Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts
We examine whether our over-time intervention increases participants' following of news media organizations, their sharing and liking of news content, and their tweeting about politics and liking of political content.
Correcting misinformation on social media with a large language model
The results demonstrate MUSE's ability to correct misinformation promptly after it appears on social media; overall, MUSE outperforms GPT-4 by 37% and even high-quality corrections from laypeople by 29%.
FakeWatch: A Framework for Detecting Fake News to Ensure Credible Elections
In today's technologically driven world, the rapid spread of fake news, particularly during critical events like elections, poses a growing threat to the integrity of information.
Knowledge Conflicts for LLMs: A Survey
This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge.