Natural Language Inference

733 papers with code • 34 benchmarks • 77 datasets

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
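As a toy illustration of the task interface (not of the learned approaches above), the sketch below implements a crude lexical-overlap baseline: the fraction of hypothesis words that also appear in the premise is thresholded into the three NLI labels. The function name and thresholds are illustrative assumptions, not a method from any of the papers listed here; real NLI systems learn these decisions from data.

```python
import re

def overlap_baseline(premise: str, hypothesis: str,
                     hi: float = 0.9, lo: float = 0.2) -> str:
    """Toy NLI heuristic: classify by lexical overlap (illustrative thresholds)."""
    p = set(re.findall(r"\w+", premise.lower()))
    h = set(re.findall(r"\w+", hypothesis.lower()))
    ratio = len(p & h) / max(len(h), 1)  # share of hypothesis words seen in premise
    if ratio >= hi:
        return "entailment"      # hypothesis almost fully covered by the premise
    if ratio <= lo:
        return "contradiction"   # almost no shared vocabulary
    return "neutral"

# Example from the table above: gold label is "entailment", but the
# heuristic predicts "neutral" because it cannot match paraphrases
# ("males" vs. "men", "soccer game" vs. "a sport").
print(overlap_baseline("A soccer game with multiple males playing.",
                       "Some men are playing a sport."))  # prints "neutral"
```

The deliberate failure case shows why surface overlap is only a weak baseline: entailment often hinges on lexical and world knowledge that string matching cannot capture, which is what the deep learning approaches are trained to model.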


Libraries

Use these libraries to find Natural Language Inference models and implementations
See all 17 libraries.

Latest papers with no code

SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials

no code yet • 7 Apr 2024

Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for ClinicalTrials.

A Morphology-Based Investigation of Positional Encodings

no code yet • 6 Apr 2024

How does the importance of positional encoding in pre-trained language models (PLMs) vary across languages with different morphological complexity?

SEME at SemEval-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials

no code yet • 5 Apr 2024

This paper describes our submission to Task 2 of SemEval-2024: Safe Biomedical Natural Language Inference for Clinical Trials.

A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference

no code yet • 3 Apr 2024

Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).

Ukrainian Texts Classification: Exploration of Cross-lingual Knowledge Transfer Approaches

no code yet • 2 Apr 2024

Despite the extensive amount of labeled datasets in the NLP text classification field, the persistent imbalance in data availability across various languages remains evident.

Evaluating Large Language Models Using Contrast Sets: An Experimental Approach

no code yet • 2 Apr 2024

The model achieved an accuracy of 89.9% on the conventional SNLI dataset but showed a reduced accuracy of 72.5% on our contrast set, indicating a substantial 17% decline.

Adverb Is the Key: Simple Text Data Augmentation with Adverb Deletion

no code yet • 29 Mar 2024

In the field of text data augmentation, rule-based methods are widely adopted for real-world applications owing to their cost-efficiency.

FACTOID: FACtual enTailment fOr hallucInation Detection

no code yet • 28 Mar 2024

We present FACTOID (FACTual enTAILment for hallucInation Detection), a benchmark dataset for FE.

Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation

no code yet • 27 Mar 2024

Moreover, we propose a method that allows the transfer of modules between incompatible PLMs without any change in the inference complexity.

Verbing Weirds Language (Models): Evaluation of English Zero-Derivation in Five LLMs

no code yet • 26 Mar 2024

We find that GPT-4 performs best on the task, followed by GPT-3.5, but that the open source language models are also able to perform it, and that the 7B parameter Mistral displays as little difference between its baseline performance on the natural language inference task and the non-prototypical syntactic category task as the massive GPT-4.