Natural Language Inference

744 papers with code • 34 benchmarks • 77 datasets

Natural language inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.

Approaches to NLI range from earlier symbolic and statistical methods to more recent deep learning models. Benchmark datasets include SNLI, MultiNLI, and SciTail, among others. You can get hands-on practice on the SNLI task by following the d2l.ai chapter on natural language inference.
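As a concrete illustration of the task, here is a minimal sketch of running a pretrained NLI model on one of the premise-hypothesis pairs above. It assumes the Hugging Face Transformers library and the publicly available roberta-large-mnli checkpoint; any MNLI-finetuned sequence-classification model would work the same way.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# MNLI-finetuned checkpoint (assumed; any NLI classification model works).
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models take the premise and hypothesis as a sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for i, p in enumerate(probs):
    # Label order for this checkpoint: contradiction, neutral, entailment.
    print(f"{model.config.id2label[i]}: {p.item():.3f}")
```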


Libraries

Use these libraries to find Natural Language Inference models and implementations
See all 17 libraries.

Most implemented papers

Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization

kohpangwei/group_DRO 20 Nov 2019

Distributionally robust optimization (DRO) allows us to learn models that instead minimize the worst-case training loss over a set of pre-defined groups.
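The core idea can be sketched in a few lines: instead of averaging the loss over the whole batch, compute the average loss per group and optimize the worst one. The sketch below is a simplified hard-max version; the paper's actual algorithm maintains an online exponential weighting over groups and relies on strong regularization.

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids, n_groups):
    """Return the largest per-group average cross-entropy in the batch."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():  # skip groups absent from this batch
            group_losses.append(per_sample[mask].mean())
    return torch.stack(group_losses).max()

# Backpropagating through this loss updates the model on its worst group.
```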

Reasoning about Entailment with Neural Attention

shyamupa/snli-entailment 22 Sep 2015

We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases.
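A simplified sketch of the mechanism, assuming LSTM hidden states for both sentences: each hypothesis token attends over all premise states through a learned bilinear score. The original model additionally conditions the scores on the previous attention state and uses several separate projection matrices, which are omitted here.

```python
import torch
import torch.nn as nn

class WordByWordAttention(nn.Module):
    """Each hypothesis state attends over all premise states (bilinear score)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.bilinear = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, premise_states, hypothesis_states):
        # premise_states: (batch, P, H); hypothesis_states: (batch, T, H)
        scores = hypothesis_states @ self.bilinear(premise_states).transpose(1, 2)
        alphas = scores.softmax(dim=-1)     # (batch, T, P) attention weights
        context = alphas @ premise_states   # attended premise summary per hypothesis token
        return context, alphas
```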

Multi-Task Deep Neural Networks for Natural Language Understanding

namisan/mt-dnn ACL 2019

In this paper, we present a Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks.
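The architectural idea is a single shared text encoder with one lightweight output head per task. A minimal sketch, assuming Hugging Face Transformers and a BERT-style checkpoint; the task names and label counts passed in are illustrative, not the paper's exact configuration.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskNLU(nn.Module):
    """Shared encoder with per-task classification heads (MT-DNN-style sketch)."""
    def __init__(self, encoder_name, task_labels):
        # task_labels: e.g. {"mnli": 3, "sst2": 2} (illustrative tasks)
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_labels.items()}
        )

    def forward(self, task, **encoder_inputs):
        hidden_states = self.encoder(**encoder_inputs).last_hidden_state
        pooled = hidden_states[:, 0]      # [CLS]-style pooling
        return self.heads[task](pooled)   # task-specific logits
```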

TinyBERT: Distilling BERT for Natural Language Understanding

huawei-noah/Pretrained-Language-Model Findings of the Association for Computational Linguistics 2020

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models.
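A rough sketch of the distillation objective, assuming the student and teacher expose logits and hidden states: match softened predictions and (projected) hidden representations. The paper additionally matches attention matrices and embeddings under a layer-mapping scheme, which is omitted here.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      hidden_proj, temperature=1.0):
    """Soft-label + hidden-state matching, in the spirit of TinyBERT.
    hidden_proj is a learned linear map from student width to teacher width
    (an assumption of this sketch)."""
    # Prediction-layer distillation: soften both distributions.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    pred_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    # Hidden-state distillation on one matched layer pair.
    hidden_loss = F.mse_loss(hidden_proj(student_hidden), teacher_hidden)
    return pred_loss + hidden_loss
```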

ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations

sinovation/ZEN Findings of the Association for Computational Linguistics 2020

Moreover, it is shown that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data.

FlauBERT: Unsupervised Language Model Pre-training for French

getalp/Flaubert LREC 2020

Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks.

mT5: A massively multilingual pre-trained text-to-text transformer

google-research/multilingual-t5 NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.
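In the text-to-text format, NLI becomes a generation problem: the premise and hypothesis are serialized into a single input string and the model is trained to emit the label word. A minimal sketch, assuming Hugging Face Transformers and the google/mt5-small checkpoint; the "mnli hypothesis: ... premise: ..." prefix follows the original T5 recipe, and the model must be fine-tuned on NLI data before the decoded output is an actual label.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"  # smallest public mT5 checkpoint (assumed)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# NLI cast into text-to-text form; target would be "entailment",
# "neutral", or "contradiction" during fine-tuning.
text = ("mnli hypothesis: Some men are playing a sport. "
        "premise: A soccer game with multiple males playing.")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```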

SpanBERT: Improving Pre-training by Representing and Predicting Spans

facebookresearch/SpanBERT TACL 2020

We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
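The central pre-training change is masking contiguous spans rather than individual tokens. A toy sketch of the sampling step, assuming the settings reported in the paper (geometric span lengths with p=0.2 clipped at 10, roughly 15% of tokens masked); the span-boundary objective itself is omitted.

```python
import random
import numpy as np

def sample_span_mask(num_tokens, mask_ratio=0.15, p=0.2, max_span=10):
    """Pick token positions to mask as whole contiguous spans."""
    masked = set()
    budget = max(1, int(num_tokens * mask_ratio))
    while len(masked) < budget:
        span_len = min(int(np.random.geometric(p)), max_span)
        start = random.randrange(0, max(1, num_tokens - span_len + 1))
        masked.update(range(start, min(start + span_len, num_tokens)))
    return sorted(masked)

# e.g. sample_span_mask(128) -> ~19 positions grouped into a few spans
```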

Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

jind11/TextFooler 27 Jul 2019

Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models.
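The attack can be sketched as a greedy word-substitution loop: rank words by how much deleting them changes the model's confidence, then try synonyms for the most important words until the prediction flips. The helpers below (predict_label, get_synonyms) are hypothetical placeholders; the real method also filters candidates with embedding similarity, part-of-speech checks, and a sentence-encoder similarity threshold.

```python
def greedy_word_attack(words, predict_label, get_synonyms):
    """Greedy synonym-substitution skeleton in the spirit of TextFooler.
    predict_label(list_of_words) -> (label, confidence);
    get_synonyms(word) -> candidate replacements. Both are placeholders."""
    orig_label, orig_conf = predict_label(words)

    # Rank words by the confidence drop caused by deleting each one.
    def importance(i):
        label, conf = predict_label(words[:i] + words[i + 1:])
        return orig_conf - conf if label == orig_label else orig_conf
    order = sorted(range(len(words)), key=importance, reverse=True)

    adv = list(words)
    for i in order:
        for candidate in get_synonyms(adv[i]):
            trial = adv[:i] + [candidate] + adv[i + 1:]
            label, _ = predict_label(trial)
            if label != orig_label:   # prediction flipped: attack succeeded
                return trial
        # (keeping the best confidence-lowering substitution is omitted here)
    return adv  # attack failed under these simple moves
```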

Supervised Multimodal Bitransformers for Classifying Images and Text

facebookresearch/mmbt 6 Sep 2019

Self-supervised bidirectional transformer models such as BERT have led to dramatic improvements in a wide variety of textual classification tasks.