Misinformation

289 papers with code • 1 benchmark • 38 datasets

Latest papers with no code

Pitfalls of Conversational LLMs on News Debiasing

no code yet • 9 Apr 2024

This paper addresses debiasing in news editing and evaluates the effectiveness of conversational Large Language Models in this task.

Evaluation of an LLM in Identifying Logical Fallacies: A Call for Rigor When Adopting LLMs in HCI Research

no code yet • 8 Apr 2024

There is increasing interest in the adoption of LLMs in HCI research.

Can Language Models Recognize Convincing Arguments?

no code yet • 31 Mar 2024

The remarkable and ever-increasing capabilities of Large Language Models (LLMs) have raised concerns about their potential misuse for creating personalized, convincing misinformation and propaganda.

The Future of Combating Rumors? Retrieval, Discrimination, and Generation

no code yet • 29 Mar 2024

The development of Artificial Intelligence Generated Content (AIGC) technology has facilitated the creation of rumors carrying misinformation, impacting societal, economic, and political ecosystems and challenging democracy.

Improving Attributed Text Generation of Large Language Models via Preference Learning

no code yet • 27 Mar 2024

Large language models have been widely adopted in natural language processing, yet they face the challenge of generating unreliable content.

Fake or JPEG? Revealing Common Biases in Generated Image Detection Datasets

no code yet • 26 Mar 2024

In this paper, we emphasize that many datasets for AI-generated image detection contain biases related to JPEG compression and image size.
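A toy sketch of the kind of bias the paper points to, under a hypothetical setup (not the paper's actual datasets): if storage format correlates perfectly with the label, a "detector" can score perfectly by reading the format and never inspecting image content.

```python
import random

random.seed(0)

def make_sample(is_generated: bool) -> dict:
    """Build a labeled sample with a deliberate confound: in this
    hypothetical dataset, real images are stored as JPEG and generated
    images losslessly, so format leaks the label."""
    return {
        "is_generated": is_generated,
        "format": "png" if is_generated else "jpeg",
        # Stand-in for pixel content (ignored by the biased detector).
        "pixels": [random.random() for _ in range(16)],
    }

dataset = [make_sample(random.random() < 0.5) for _ in range(1000)]

def biased_detector(sample: dict) -> bool:
    # Exploits the format shortcut instead of generation artifacts.
    return sample["format"] == "png"

accuracy = sum(
    biased_detector(s) == s["is_generated"] for s in dataset
) / len(dataset)
print(f"accuracy of the format-only detector: {accuracy:.2f}")  # 1.00
```

Matching compression and resolution across both classes before training is one way to remove such a shortcut.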

TrustAI at SemEval-2024 Task 8: A Comprehensive Analysis of Multi-domain Machine Generated Text Detection Techniques

no code yet • 25 Mar 2024

In this paper, we present our methods for SemEval-2024 Task 8, aiming to detect machine-generated text across various domains in both monolingual and multilingual contexts.

Evidence-Driven Retrieval Augmented Response Generation for Online Misinformation

no code yet • 22 Mar 2024

The proliferation of online misinformation has posed significant threats to public interest.
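A minimal sketch of the retrieval step in an evidence-driven pipeline, using simple bag-of-words overlap as the scorer; the evidence corpus, the scoring function, and the generation step here are illustrative assumptions, not the paper's method.

```python
from collections import Counter

# Hypothetical evidence corpus a response generator could draw from.
EVIDENCE = [
    "WHO states that vaccines undergo rigorous safety testing before approval.",
    "NASA imagery confirms the Earth is an oblate spheroid.",
    "Peer-reviewed studies find no link between 5G networks and illness.",
]

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a text."""
    return Counter(text.lower().strip(".").split())

def retrieve(claim: str, corpus: list, k: int = 1) -> list:
    """Rank evidence by token overlap with the claim; return top-k."""
    claim_tokens = tokenize(claim)
    scored = sorted(
        corpus,
        key=lambda doc: sum((tokenize(doc) & claim_tokens).values()),
        reverse=True,
    )
    return scored[:k]

claim = "5G networks cause illness"
evidence = retrieve(claim, EVIDENCE)
print(evidence[0])
```

A real system would replace the overlap score with a dense retriever and feed the retrieved evidence to a language model that drafts the counter-response.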

From Perils to Possibilities: Understanding how Human (and AI) Biases affect Online Fora

no code yet • 21 Mar 2024

We explore the emergence of online support groups through users' self-disclosure and social support mechanisms.

Threats, Attacks, and Defenses in Machine Unlearning: A Survey

no code yet • 20 Mar 2024

Machine Unlearning (MU) has gained considerable attention recently for its potential to achieve Safe AI by removing the influence of specific data from trained machine learning models.
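The "removing the influence of specific data" idea can be made concrete with the retrain-from-scratch baseline for a trivially trainable model (a mean); real MU methods for deep networks are far more involved, and this sketch is only an illustration of exact unlearning.

```python
def train(data: list) -> float:
    """A trivially 'trainable' model: the mean of the training data."""
    return sum(data) / len(data)

def unlearn(data: list, to_forget: float) -> float:
    """Exact unlearning baseline: retrain without the forgotten point,
    so the result carries no trace of it."""
    remaining = [x for x in data if x != to_forget]
    return train(remaining)

data = [1.0, 2.0, 3.0, 100.0]
model = train(data)                      # influenced by the outlier 100.0
forgotten_model = unlearn(data, 100.0)   # identical to never seeing 100.0
print(model, forgotten_model)  # 26.5 2.0
```

Attacks surveyed in this area probe whether the forgotten data's influence truly vanishes (e.g., via membership inference), while defenses aim to guarantee it does without full retraining.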