Natural Language Inference
733 papers with code • 34 benchmarks • 77 datasets
Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".
Example:
| Premise | Label | Hypothesis |
|---|---|---|
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice with the SNLI task by following this d2l.ai chapter.
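As a minimal sketch of how a deep learning NLI system is used at inference time, the snippet below decodes a model's three-way output logits into one of the labels from the table above. The logits are hypothetical stand-ins for what a model fine-tuned on SNLI or MultiNLI might emit; the contradiction/neutral/entailment label order is a common convention but varies between models, so check a given model's config before relying on it.

```python
import math

# A common three-way NLI label order (e.g., roberta-large-mnli uses it);
# other checkpoints may order labels differently.
LABELS = ["contradiction", "neutral", "entailment"]

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Return the NLI label with the highest probability, plus that probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Hypothetical logits a trained model might produce for the pair:
#   premise:    "A soccer game with multiple males playing."
#   hypothesis: "Some men are playing a sport."
label, prob = decode([-2.1, -0.3, 3.4])
print(label, round(prob, 3))  # the large third logit decodes to "entailment"
```

In practice the logits would come from a pretrained cross-encoder that reads the premise and hypothesis jointly (for instance via a text-classification pipeline); the decoding step stays the same.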
Libraries
Use these libraries to find Natural Language Inference models and implementations.

Latest papers
Pixel Sentence Representation Learning
To our knowledge, this is the first representation learning method devoid of traditional language models for understanding sentence and document semantics, marking a stride closer to human-like textual comprehension.
Plausible Extractive Rationalization through Semi-Supervised Entailment Signal
The increasing use of complex and opaque black-box models calls for interpretable alternatives; one such option is extractive rationalizing models.
A Hypothesis-Driven Framework for the Analysis of Self-Rationalising Models
The self-rationalising capabilities of LLMs are appealing because the generated explanations can give insights into the plausibility of the predictions.
HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text
Black-box hard-label adversarial attack on text is a practical and challenging task, as the text data space is inherently discrete and non-differentiable, and only the predicted label is accessible.
Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement
An increasing amount of research in Natural Language Inference (NLI) focuses on the application and evaluation of Large Language Models (LLMs) and their reasoning capabilities.
MT-Ranker: Reference-free machine translation evaluation by inter-system ranking
Traditionally, Machine Translation (MT) Evaluation has been treated as a regression problem -- producing an absolute translation-quality score.
InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification
Text simplification aims to make technical texts more accessible to laypeople but often results in deletion of information and vagueness.
Textual Entailment for Effective Triple Validation in Object Prediction
Knowledge base population seeks to expand knowledge graphs with facts that are typically extracted from a text corpus.
Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains
In this paper, we study seed-guided fine-grained entity typing in science and engineering domains, which takes the name and a few seed entities for each entity type as the only supervision and aims to classify new entity mentions into both seen and unseen types (i.e., those without seed entities).
Are self-explanations from Large Language Models faithful?
For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words.