Misinformation
281 papers with code • 1 benchmark • 38 datasets
Latest papers
Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images
Creating high-quality and realistic images is now possible thanks to the impressive advancements in image generation.
ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large Language Model
Driven by a comprehensive analysis of conspiracy text that reveals its distinctive affective features, we propose ConspEmoLLM, the first open-source LLM that integrates affective information and is able to perform diverse tasks relating to conspiracy theories.
Cross-Lingual Learning vs. Low-Resource Fine-Tuning: A Case Study with Fact-Checking in Turkish
While misinformation is prevalent across many languages, the majority of research in this field has concentrated on English.
Challenges in Pre-Training Graph Neural Networks for Context-Based Fake News Detection: An Evaluation of Current Strategies and Resource Limitations
Pre-training of neural networks has recently revolutionized the field of Natural Language Processing (NLP), having previously demonstrated its effectiveness in computer vision.
Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models
Large language models can generate high-quality responses that nonetheless contain misinformation, underscoring the need for regulation by distinguishing AI-generated from human-written text.
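To make the idea concrete, here is a minimal, hypothetical sketch of "green list" watermark detection in the style of Kirchenbauer et al. — not the token-specific scheme this paper proposes. The toy vocabulary, the `GAMMA` fraction, and the hash-based green-list rule are all illustrative assumptions: during generation, each preceding token seeds a hash that marks a fraction of the vocabulary as "green"; a watermarked generator favors green tokens, and a detector counts how many observed tokens are green and computes a z-score against the unwatermarked expectation.

```python
import hashlib
import math

VOCAB = [f"tok{i}" for i in range(100)]  # toy vocabulary (assumption)
GAMMA = 0.5  # fraction of the vocabulary marked green at each step

def is_green(prev_token: str, token: str) -> bool:
    """A token is 'green' if a hash seeded by the previous token selects it."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return int(digest, 16) % 100 < GAMMA * 100

def detect_z_score(tokens: list[str]) -> float:
    """z-score of the green-token count vs. the unwatermarked expectation.

    Under no watermark, each of the n bigram positions is green with
    probability GAMMA, so the count is binomial(n, GAMMA); a large z-score
    indicates watermarked (AI-generated) text.
    """
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
```

A generator that always picks a green continuation yields a z-score around the square root of the sequence length, while unwatermarked text hovers near zero — which is what makes the statistical test usable for AI-text attribution.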
Towards Fair Graph Anomaly Detection: Problem, New Datasets, and Evaluation
The Fair Graph Anomaly Detection (FairGAD) problem aims to accurately detect anomalous nodes in an input graph while ensuring fairness and avoiding biased predictions against individuals from sensitive subgroups such as gender or political leanings.
Backdoor Attacks on Dense Passage Retrievers for Disseminating Misinformation
To achieve this, we propose a perilous backdoor attack triggered by grammar errors in dense passage retrieval.
The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative
Unlike direct harmful output generation for MLLMs, our research demonstrates how a single MLLM agent can be subtly influenced to generate prompts that, in turn, induce other MLLM agents in the society to output malicious content.
What Evidence Do Language Models Find Convincing?
Retrieval-augmented language models are increasingly tasked with subjective, contentious, and conflicting queries such as "is aspartame linked to cancer?".
Machine-generated Text Localization
Machine-Generated Text (MGT) detection aims to identify a piece of text as machine- or human-written.