Misinformation
289 papers with code • 1 benchmark • 38 datasets
Latest papers with no code
Pitfalls of Conversational LLMs on News Debiasing
This paper addresses debiasing in news editing and evaluates the effectiveness of conversational Large Language Models in this task.
Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research
There is increasing interest in the adoption of LLMs in HCI research.
Can Language Models Recognize Convincing Arguments?
The remarkable and ever-increasing capabilities of Large Language Models (LLMs) have raised concerns about their potential misuse for creating personalized, convincing misinformation and propaganda.
The Future of Combating Rumors? Retrieval, Discrimination, and Generation
The development of Artificial Intelligence Generated Content (AIGC) technology has facilitated the creation of rumors containing misinformation, impacting societal, economic, and political ecosystems and challenging democracy.
Improving Attributed Text Generation of Large Language Models via Preference Learning
Large language models have been widely adopted in natural language processing, yet they face the challenge of generating unreliable content.
Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets
In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
TrustAI at SemEval-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques
In this paper, we present our methods for SemEval-2024 Task 8, aiming to detect machine-generated text across various domains in both monolingual and multilingual contexts.
Evidence-Driven Retrieval Augmented Response Generation for Online Misinformation
The proliferation of online misinformation has posed significant threats to public interest.
From Perils to Possibilities: Understanding how Human (and AI) Biases affect Online Fora
We explore the emergence of online support groups through users' self-disclosure and social support mechanisms.
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Machine Unlearning (MU) has gained considerable attention recently for its potential to achieve Safe AI by removing the influence of specific data from trained machine learning models.