Natural Language Inference
729 papers with code • 43 benchmarks • 77 datasets
Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".
Example:
| Premise | Label | Hypothesis |
|---|---|---|
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
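The three-way label scheme above is easy to see in code. The sketch below encodes the three example pairs from the table as (premise, hypothesis, label) triples and scores a set of hypothetical model predictions against them; the `preds` list is invented for illustration, not the output of any real model.

```python
# The three SNLI-style examples from the table above, as
# (premise, hypothesis, gold_label) triples.
LABELS = ("entailment", "contradiction", "neutral")

examples = [
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping.", "contradiction"),
    ("An older and younger man smiling.",
     "Two men are smiling and laughing at the cats playing on the floor.",
     "neutral"),
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport.", "entailment"),
]

def accuracy(predictions, gold):
    """Fraction of premise-hypothesis pairs labelled correctly."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Hypothetical model output, for illustration only.
preds = ["contradiction", "entailment", "entailment"]
gold = [label for _, _, label in examples]
print(accuracy(preds, gold))  # 2 of 3 pairs correct
```

Real NLI systems produce the `preds` list by feeding each premise-hypothesis pair through a classifier; the evaluation itself is just this per-pair accuracy.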
Libraries
Use these libraries to find Natural Language Inference models and implementations.

Latest papers with no code
How often are errors in natural language reasoning due to paraphrastic variability?
We propose a metric for evaluating the paraphrastic consistency of natural language reasoning models based on the probability of a model achieving the same correctness on two paraphrases of the same problem.
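The metric described above can be sketched simply: given, for each problem, whether the model answered paraphrase A and paraphrase B correctly, estimate the probability that the two outcomes agree. This is a hedged simplification of the paper's proposal, with invented example data; the function name and inputs are assumptions, not the authors' implementation.

```python
# Sketch of a paraphrastic-consistency score: the estimated probability
# that a model is equally correct on two paraphrases of the same problem.
def paraphrastic_consistency(correct_a, correct_b):
    """correct_a[i] / correct_b[i]: whether the model answered paraphrase
    A / B of problem i correctly. Returns the fraction of problems on
    which the two outcomes agree."""
    assert len(correct_a) == len(correct_b)
    agree = sum(a == b for a, b in zip(correct_a, correct_b))
    return agree / len(correct_a)

# Hypothetical per-problem outcomes for five problems.
a = [True, True, False, True, False]
b = [True, False, False, True, True]
print(paraphrastic_consistency(a, b))  # agrees on 3 of 5 problems
```

A perfectly paraphrase-robust model would score 1.0 regardless of its raw accuracy, which is what separates this consistency measure from ordinary accuracy.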
DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness
Safe and reliable natural language inference is critical for extracting insights from clinical trial reports but poses challenges due to biases in large pre-trained language models.
MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference
Furthermore, we show that domain shift degrades the performance of scientific NLI models, demonstrating the diverse characteristics of the different domains in our dataset.
SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials
Addressing this, we present SemEval-2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials.
A Morphology-Based Investigation of Positional Encodings
How does the importance of positional encoding in pre-trained language models (PLMs) vary across languages with different morphological complexity?
SEME at SemEval-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials
This paper describes our submission to Task 2 of SemEval-2024: Safe Biomedical Natural Language Inference for Clinical Trials.
A Differentiable Integer Linear Programming Solver for Explanation-Based Natural Language Inference
Integer Linear Programming (ILP) has been proposed as a formalism for encoding precise structural and semantic constraints for Natural Language Inference (NLI).
Ukrainian Texts Classification: Exploration of Cross-lingual Knowledge Transfer Approaches
Despite the extensive amount of labeled datasets in the NLP text classification field, the persistent imbalance in data availability across various languages remains evident.
Evaluating Large Language Models Using Contrast Sets: An Experimental Approach
The model achieved an accuracy of 89.9% on the conventional SNLI dataset but only 72.5% on our contrast set, a substantial decline of roughly 17 percentage points.
Adverb Is the Key: Simple Text Data Augmentation with Adverb Deletion
In the field of text data augmentation, rule-based methods are widely adopted for real-world applications owing to their cost-efficiency.