Natural Language Inference
729 papers with code • 43 benchmarks • 77 datasets
Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".
Example:
| Premise | Label | Hypothesis |
|---|---|---|
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
Approaches used for NLI range from earlier symbolic and statistical methods to more recent deep learning approaches. Benchmark datasets used for NLI include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following this d2l.ai chapter.
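As a concrete illustration of the three-way label scheme, the example table above can be encoded as (premise, hypothesis, label) triples and scored for accuracy. This is only a minimal sketch: `predict` is a hypothetical placeholder for a real NLI model (for instance, one trained on SNLI), not an actual implementation.

```python
# SNLI-style examples as (premise, hypothesis, gold label) triples,
# taken from the table above.
LABELS = ("entailment", "contradiction", "neutral")

examples = [
    ("A man inspects the uniform of a figure in some East Asian country.",
     "The man is sleeping.", "contradiction"),
    ("An older and younger man smiling.",
     "Two men are smiling and laughing at the cats playing on the floor.",
     "neutral"),
    ("A soccer game with multiple males playing.",
     "Some men are playing a sport.", "entailment"),
]

def predict(premise: str, hypothesis: str) -> str:
    """Hypothetical stand-in for a trained NLI model.

    Always answering "neutral" acts as a trivial baseline here.
    """
    return "neutral"

def accuracy(data) -> float:
    """Fraction of triples where the predicted label matches the gold label."""
    correct = sum(predict(p, h) == gold for p, h, gold in data)
    return correct / len(data)

print(f"baseline accuracy: {accuracy(examples):.2f}")
```

Swapping the placeholder `predict` for a model fine-tuned on SNLI or MultiNLI turns this into a standard NLI evaluation loop.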
Libraries
Use these libraries to find Natural Language Inference models and implementations.
Latest papers with no code
FACTOID: FACtual enTailment fOr hallucInation Detection
We present FACTOID (FACTual enTAILment for hallucInation Detection), a benchmark dataset for factual entailment (FE).
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation
Moreover, we propose a method that allows the transfer of modules between incompatible PLMs without any change in the inference complexity.
Verbing Weirds Language (Models): Evaluation of English Zero-Derivation in Five LLMs
We find that GPT-4 performs best on the task, followed by GPT-3.5, but that the open-source language models can also perform it; notably, the 7B-parameter Mistral shows as little difference between its baseline performance on the natural language inference task and its performance on the non-prototypical syntactic category task as the much larger GPT-4.
Multilingual Sentence-T5: Scalable Sentence Encoders for Multilingual Applications
Prior work on multilingual sentence embedding has demonstrated that models built through the efficient use of natural language inference (NLI) data can outperform conventional methods.
Ontology Completion with Natural Language Inference and Concept Embeddings: An Analysis
One line of work treats this task as a Natural Language Inference (NLI) problem, thus relying on the knowledge captured by language models to identify the missing knowledge.
Dermacen Analytica: A Novel Methodology Integrating Multi-Modal Large Language Models with Machine Learning in tele-dermatology
The workflow integrates large language models, transformer-based vision models, and sophisticated machine learning tools.
Cross-Lingual Transfer for Natural Language Inference via Multilingual Prompt Translator
To efficiently transfer soft prompts, we propose a novel framework, Multilingual Prompt Translator (MPT), in which a multilingual prompt translator is introduced to properly process the crucial knowledge embedded in a prompt by changing its language knowledge while retaining its task knowledge.
Exploring Tokenization Strategies and Vocabulary Sizes for Enhanced Arabic Language Models
This paper presents a comprehensive examination of the impact of tokenization strategies and vocabulary sizes on the performance of Arabic language models in downstream natural language processing tasks.
SIFiD: Reassess Summary Factual Inconsistency Detection with LLM
Ensuring factual consistency between the summary and the original document is paramount in summarization tasks.
Cross-lingual Transfer or Machine Translation? On Data Augmentation for Monolingual Semantic Textual Similarity
Rather, we find a superiority of the Wikipedia domain over the NLI domain for these languages, in contrast to prior studies that focused on NLI as training data.