Natural Language Inference

733 papers with code • 34 benchmarks • 77 datasets

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
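
To make the labels concrete, here is a minimal sketch of running the last example pair above through an off-the-shelf NLI model. It assumes the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; any MNLI-trained cross-encoder could be swapped in.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: roberta-large-mnli is one publicly available MNLI-trained
# checkpoint; any NLI cross-encoder can be substituted here.
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models are cross-encoders: premise and hypothesis are encoded jointly.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The checkpoint's config maps class indices to contradiction/neutral/entailment.
probs = logits.softmax(dim=-1)[0]
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx]:.3f}")
```

For this pair, an MNLI-trained model should put most of its probability mass on entailment, matching the label in the table.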

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice with the SNLI task by following this d2l.ai chapter.
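
To experiment with one of these benchmarks directly, the sketch below loads SNLI; it assumes the Hugging Face datasets library (the d2l.ai chapter linked above builds a full model on the same data).

```python
from datasets import load_dataset

# SNLI as distributed on the Hugging Face Hub: premise, hypothesis, label.
snli = load_dataset("snli")

# Labels: 0 = entailment, 1 = neutral, 2 = contradiction.
# Pairs without annotator consensus carry label -1 and are usually dropped.
train = snli["train"].filter(lambda ex: ex["label"] != -1)

print(train[0])
print(train.features["label"].names)  # ['entailment', 'neutral', 'contradiction']
```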

Pixel Sentence Representation Learning

gowitheflow-1998/pixel-linguist 13 Feb 2024

To our knowledge, this is the first representation learning method devoid of traditional language models for understanding sentence and document semantics, marking a stride closer to human-like textual comprehension.

Plausible Extractive Rationalization through Semi-Supervised Entailment Signal

wj210/NLI_ETP 13 Feb 2024

The increasing use of complex and opaque black-box models requires the adoption of interpretable measures; one such option is extractive rationalizing models, which serve as a more interpretable alternative.

A Hypothesis-Driven Framework for the Analysis of Self-Rationalising Models

marbr987/hypothesis_driven_analysis_of_self_rationalising_models 7 Feb 2024

The self-rationalising capabilities of LLMs are appealing because the generated explanations can give insights into the plausibility of the predictions.

HQA-Attack: Toward High Quality Black-Box Hard-Label Adversarial Attack on Text

hqa-attack/hqaattack-demo NeurIPS 2023

Black-box hard-label adversarial attack on text is a practical and challenging task, as the text data space is inherently discrete and non-differentiable, and only the predicted label is accessible.

Enhancing Ethical Explanations of Large Language Models through Iterative Symbolic Refinement

neuro-symbolic-ai/explanation_based_ethical_reasoning 1 Feb 2024

An increasing amount of research in Natural Language Inference (NLI) focuses on the application and evaluation of Large Language Models (LLMs) and their reasoning capabilities.

MT-Ranker: Reference-free machine translation evaluation by inter-system ranking

ibraheem-moosa/mt-ranker 30 Jan 2024

Traditionally, Machine Translation (MT) Evaluation has been treated as a regression problem -- producing an absolute translation-quality score.

InfoLossQA: Characterizing and Recovering Information Loss in Text Simplification

jantrienes/InfoLossQA 29 Jan 2024

Text simplification aims to make technical texts more accessible to laypeople but often results in deletion of information and vagueness.

Textual Entailment for Effective Triple Validation in Object Prediction

expertailab/textual-entailment-for-effective-triple-validation-in-object-prediction 29 Jan 2024

Knowledge base population seeks to expand knowledge graphs with facts that are typically extracted from a text corpus.

Seed-Guided Fine-Grained Entity Typing in Science and Engineering Domains

yuzhimanhua/setype 23 Jan 2024

In this paper, we study the task of seed-guided fine-grained entity typing in science and engineering domains, which takes the name and a few seed entities for each entity type as the only supervision and aims to classify new entity mentions into both seen and unseen types (i.e., those without seed entities).

Are self-explanations from Large Language Models faithful?

AndreasMadsen/llm-introspection 15 Jan 2024

For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words.
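
The consistency check described above can be sketched in a few lines. Everything below is hypothetical scaffolding for illustration (a toy classifier and a hand-picked "important" word), not the paper's actual models or protocol: occlude the words an explanation claims are important and test whether the prediction changes.

```python
# Toy occlusion-style faithfulness check: if removing the words an
# explanation flags as important does NOT change the prediction, the
# explanation is suspect.

def predict(text: str) -> str:
    """Stand-in classifier; in practice this would be the LLM under test."""
    return "positive" if "great" in text else "negative"

text = "The movie was great and moving."
claimed_important = {"great"}  # words the model's self-explanation flagged

# Occlude the claimed-important words and re-run the prediction.
occluded = " ".join(w for w in text.split() if w.strip(".,") not in claimed_important)

consistent = predict(text) != predict(occluded)
print(f"original: {predict(text)}, occluded: {predict(occluded)}, "
      f"explanation consistent with behaviour: {consistent}")
```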
