Natural Language Inference

729 papers with code • 43 benchmarks • 77 datasets

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
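As a quick illustration of the task format, here is a minimal sketch that scores a premise/hypothesis pair with an off-the-shelf MNLI-trained classifier from Hugging Face transformers; the roberta-large-mnli checkpoint is an assumed, publicly available choice and is not tied to any specific paper listed below.

```python
# Minimal NLI sketch: classify a premise/hypothesis pair with an MNLI-trained model.
# Assumes the publicly available "roberta-large-mnli" checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer pairs the premise and hypothesis into a single input sequence.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1).squeeze()
labels = [model.config.id2label[i] for i in range(len(probs))]
print(dict(zip(labels, probs.tolist())))  # expect the "ENTAILMENT" label to dominate
```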

Libraries

Use these libraries to find Natural Language Inference models and implementations
See all 17 libraries.

Latest papers with no code

How often are errors in natural language reasoning due to paraphrastic variability?

no code yet • 17 Apr 2024

We propose a metric for evaluating the paraphrastic consistency of natural language reasoning models based on the probability of a model achieving the same correctness on two paraphrases of the same problem.
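As a rough illustration of the idea (this is an interpretation of the abstract, not the paper's exact metric), such a score could be estimated as the fraction of paraphrase pairs on which a model is either correct on both or wrong on both:

```python
# Illustrative paraphrastic-consistency score; the helper name and this simple
# estimator are assumptions based on the abstract, not the paper's definition.
from typing import List, Tuple

def paraphrastic_consistency(pair_correctness: List[Tuple[bool, bool]]) -> float:
    """pair_correctness[i] = (correct on paraphrase A, correct on paraphrase B)."""
    agree = sum(1 for a, b in pair_correctness if a == b)
    return agree / len(pair_correctness)

# The model agrees with itself on 3 of 4 paraphrase pairs -> 0.75.
print(paraphrastic_consistency([(True, True), (True, False), (False, False), (True, True)]))
```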

DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness

no code yet • 14 Apr 2024

Safe and reliable natural language inference is critical for extracting insights from clinical trial reports but poses challenges due to biases in large pre-trained language models.

MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference

no code yet • 11 Apr 2024

Furthermore, we show that domain shift degrades the performance of scientific NLI models which demonstrates the diverse characteristics of different domains in our dataset.

SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials

no code yet • 7 Apr 2024

Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials.

A Morphology-Based Investigation of Positional Encodings

no code yet • 6 Apr 2024

How does the importance of positional encoding in pre-trained language models (PLMs) vary across languages with different morphological complexity?

SEME at SemEval-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials

no code yet • 5 Apr 2024

This paper describes our submission to Task 2 of SemEval-2024: Safe Biomedical Natural Language Inference for Clinical Trials.

A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference

no code yet • 3 Apr 2024

Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).
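To give a flavour of what encoding constraints as an ILP can look like, here is a toy sketch using PuLP that selects a small set of supporting facts for an explanation under a budget constraint; the facts, relevance scores, and constraints are invented for illustration and are not the paper's formulation (which additionally makes the solver differentiable).

```python
# Toy ILP for explanation selection: pick at most k supporting facts that
# maximize relevance to the hypothesis, subject to a simple coverage constraint.
# All facts, scores, and constraints below are illustrative assumptions.
import pulp

facts = ["a soccer game is a sport", "males are men", "playing means participating"]
relevance = [0.9, 0.8, 0.3]   # assumed relevance of each fact to the hypothesis
k = 2                         # explanation budget

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(facts))]

prob += pulp.lpSum(relevance[i] * x[i] for i in range(len(facts)))  # objective
prob += pulp.lpSum(x) <= k                                          # budget constraint
prob += x[0] + x[1] >= 1                                            # e.g. require a bridging fact

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([facts[i] for i in range(len(facts)) if x[i].value() == 1])
```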

Ukrainian Texts Classification: Exploration of Cross-lingual Knowledge Transfer Approaches

no code yet • 2 Apr 2024

Despite the extensive amount of labeled datasets in the NLP text classification field, the persistent imbalance in data availability across various languages remains evident.

Evaluating Large Language Models Using Contrast Sets: An Experimental Approach

no code yet • 2 Apr 2024

The model achieved an accuracy of 89.9% on the conventional SNLI dataset but showed a reduced accuracy of 72.5% on our contrast set, indicating a substantial decline of roughly 17 percentage points.
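For reference, the gap quoted above works out as follows (numbers taken directly from the abstract):

```python
# Accuracy gap between the standard SNLI test set and the contrast set,
# using the figures quoted in the abstract above.
snli_acc, contrast_acc = 0.899, 0.725
print(f"Absolute drop: {(snli_acc - contrast_acc) * 100:.1f} percentage points")  # 17.4
```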

Adverb Is the Key: Simple Text Data Augmentation with Adverb Deletion

no code yet • 29 Mar 2024

In the field of text data augmentation, rule-based methods are widely adopted for real-world applications owing to their cost-efficiency.
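As a minimal sketch of the adverb-deletion idea (the exact procedure in the paper may differ), one could drop tokens tagged as adverbs with a POS tagger such as spaCy's:

```python
# Toy adverb-deletion augmentation: remove tokens POS-tagged as adverbs.
# Assumes spaCy with the small English model; whitespace handling is kept naive.
import spacy

nlp = spacy.load("en_core_web_sm")

def drop_adverbs(text: str) -> str:
    doc = nlp(text)
    return " ".join(tok.text for tok in doc if tok.pos_ != "ADV")

print(drop_adverbs("The man quickly inspects the uniform very carefully."))
# -> "The man inspects the uniform ."
```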