Fact Verification
92 papers with code • 3 benchmarks • 14 datasets
Fact verification, also called "fact checking", is the task of verifying claims made in natural language text against a database of known facts.
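As a minimal illustration of the task setup, the sketch below labels a claim SUPPORTED, REFUTED, or NOT ENOUGH INFO against a small evidence store. The token-overlap heuristic and thresholds are toy stand-ins, not any particular paper's method; real systems use learned retrieval and inference models.

```python
def verify(claim: str, evidence_db: list[str]) -> str:
    """Toy verifier: label a claim by token overlap with its closest evidence."""
    def toks(s: str) -> set[str]:
        return set(s.lower().rstrip(".").split())

    claim_toks = toks(claim)
    # Pick the evidence sentence sharing the most tokens with the claim.
    best = max(evidence_db, key=lambda ev: len(claim_toks & toks(ev)))
    jaccard = len(claim_toks & toks(best)) / len(claim_toks | toks(best))
    if jaccard > 0.8:
        return "SUPPORTED"
    if jaccard > 0.4:
        return "REFUTED"  # same topic, conflicting detail
    return "NOT ENOUGH INFO"
```

For example, against the single evidence sentence "Paris is the capital of France.", the claim "Paris is the capital of Germany." shares the topic but conflicts in detail and is labeled REFUTED.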
Most implemented papers
Multilingual Evidence Retrieval and Fact Verification to Combat Global Disinformation: The Power of Polyglotism
This article investigates multilingual evidence retrieval and fact verification as a step toward combating global disinformation; to the best of our knowledge, it is the first effort of this kind.
FaVIQ: FAct Verification from Information-seeking Questions
Claims in FaVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
CREAK: A Dataset for Commonsense Reasoning over Entity Knowledge
We introduce CREAK, a testbed for commonsense reasoning about entity knowledge, bridging fact-checking about entities (Harry Potter is a wizard and is skilled at riding a broomstick) with commonsense inferences (if you're good at a skill you can teach others how to do it).
Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective
Most existing debiasing methods identify and weaken training samples with biased features (i.e., superficial surface features that cause such spurious correlations).
Precise Zero-Shot Dense Retrieval without Relevance Labels
Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g., InstructGPT) to generate a hypothetical document.
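The HyDE pipeline can be sketched as follows: generate a hypothetical answer document, embed it, and rank real documents by similarity to that embedding. The bag-of-words "embedding" and the `generate` callable below are toy assumptions for illustration; the paper uses an instruction-following LM plus an unsupervised dense encoder.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; HyDE itself uses a dense encoder.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def hyde_retrieve(query: str, corpus: list[str], generate) -> list[str]:
    # Step 1: zero-shot instruct the LM to write a hypothetical answer passage.
    hypothetical = generate(f"Write a passage answering the question: {query}")
    # Step 2: embed the hypothetical document instead of the query itself.
    hyp_vec = embed(hypothetical)
    # Step 3: rank real documents by similarity to the hypothetical one.
    return sorted(corpus, key=lambda doc: cosine(hyp_vec, embed(doc)), reverse=True)
```

The key design point is that retrieval operates document-to-document: the hypothetical passage, even if factually imperfect, tends to lie closer to relevant documents in embedding space than the raw query does.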
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection tokens.
DeFactoNLP: Fact Verification using Entity Recognition, TFIDF Vector Comparison and Decomposable Attention
In this paper, we describe DeFactoNLP, the system we designed for the FEVER 2018 Shared Task.
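The TF-IDF vector comparison step named in the title can be sketched as below: rank candidate evidence sentences by TF-IDF cosine similarity to the claim. This is a hedged, minimal reconstruction of that retrieval idea, not the system's actual implementation.

```python
import math
from collections import Counter


def tfidf_rank(claim: str, sentences: list[str]) -> list[str]:
    """Rank candidate evidence sentences by TF-IDF cosine similarity to a claim."""
    docs = [claim] + sentences
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # Document frequency over the small claim+sentences collection.
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}

    def vec(toks: list[str]) -> dict:
        return {t: c * idf[t] for t, c in Counter(toks).items()}

    def cosine(a: dict, b: dict) -> float:
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    claim_vec = vec(tokenized[0])
    sent_vecs = [vec(toks) for toks in tokenized[1:]]
    order = sorted(range(len(sentences)),
                   key=lambda i: cosine(claim_vec, sent_vecs[i]),
                   reverse=True)
    return [sentences[i] for i in order]
```

The top-ranked sentences would then be passed to a downstream textual-entailment model to decide the claim's label.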
TabFact: A Large-scale Dataset for Table-based Fact Verification
To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.
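The table-based setting can be illustrated with a toy example: a statement is checked against structured rows and labeled ENTAILED or REFUTED. The mini-table and statements below are hypothetical stand-ins for TabFact's Wikipedia tables and human-annotated statements.

```python
# Hypothetical mini-table standing in for a Wikipedia table.
table = [
    {"country": "France", "capital": "Paris", "population_m": 68},
    {"country": "Japan", "capital": "Tokyo", "population_m": 125},
]


def label(statement_holds: bool) -> str:
    return "ENTAILED" if statement_holds else "REFUTED"


def row(country: str) -> dict:
    return next(r for r in table if r["country"] == country)


# Statement: "Japan has a larger population than France."
label_1 = label(row("Japan")["population_m"] > row("France")["population_m"])
# Statement: "Tokyo is the capital of France."
label_2 = label(row("France")["capital"] == "Tokyo")
```

Unlike text-only verification, models here must ground comparative and aggregative language ("larger", "most", "total") in cell values rather than in sentence-level evidence.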
Fine-grained Fact Verification with Kernel Graph Attention Network
Fact verification requires fine-grained natural language inference capability that finds subtle clues to identify claims that are syntactically and semantically correct but not well supported.
Elastic weight consolidation for better bias inoculation
The biases present in training datasets have been shown to affect models for sentence pair classification tasks such as natural language inference (NLI) and fact verification.