
Natural Language Inference

268 papers with code · Natural Language Processing

Natural language inference is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.
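Since every repository listed below is huggingface/transformers, here is a minimal sketch of scoring a premise-hypothesis pair with that library. The checkpoint name "roberta-large-mnli" is an assumption (a public MNLI-finetuned model), not something this page specifies; the label names are read from the model config rather than hard-coded.

```python
# Minimal sketch: score a premise/hypothesis pair with an MNLI-finetuned model.
# Assumption: the public "roberta-large-mnli" checkpoint is available.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Premise and hypothesis are encoded together as one sentence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Label order varies by checkpoint, so read it from the config.
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```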

Benchmarks

[Interactive leaderboard table: trend, dataset, best method, paper title, code, compare]

Greatest papers with code

FlauBERT: Unsupervised Language Model Pre-training for French

LREC 2020 huggingface/transformers

Language models have become a key step toward achieving state-of-the-art results in many Natural Language Processing (NLP) tasks.

LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE TEXT CLASSIFICATION WORD SENSE DISAMBIGUATION

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

ACL 2020 huggingface/transformers

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.

Ranked #10 on Question Answering on SQuAD1.1 dev (F1 metric)

DENOISING MACHINE TRANSLATION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING TEXT GENERATION
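The two noising operations named in the abstract, sentence shuffling and span in-filling with a single mask token, are easy to sketch. The snippet below is illustrative only and does not reproduce BART's actual preprocessing (the paper, for instance, samples span lengths from a Poisson distribution).

```python
# Illustrative sketch of the two noising operations described above;
# not BART's exact preprocessing.
import random

def sentence_shuffle(text, seed=None):
    """Randomly permute the order of sentences in a document."""
    rng = random.Random(seed)
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

def text_infill(tokens, span_len=3, mask_token="<mask>", seed=None):
    """Replace one contiguous span of tokens with a single mask token."""
    rng = random.Random(seed)
    if len(tokens) <= span_len:
        return [mask_token]
    start = rng.randrange(len(tokens) - span_len)
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

doc = "The cat sat. The dog barked. It rained all day."
print(sentence_shuffle(doc, seed=0))
print(text_infill("a quick brown fox jumps over".split(), seed=0))
```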

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

NeurIPS 2019 huggingface/transformers

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging.

LANGUAGE MODELLING LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TRANSFER LEARNING
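DistilBERT trains with a combination of a masked-LM loss, a cosine embedding loss, and a soft-target distillation loss. Below is a minimal sketch of the distillation term alone, the temperature-softened KL divergence of Hinton et al.; the temperature value and logit shapes are chosen purely for illustration.

```python
# Minimal sketch of a temperature-softened distillation loss of the kind
# DistilBERT trains with (alongside its masked-LM and cosine losses).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t**2 so gradient magnitudes match the unsoftened objective.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

student = torch.randn(4, 30522)  # batch of student logits over a BERT-style vocab
teacher = torch.randn(4, 30522)
print(distillation_loss(student, teacher).item())
```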

RoBERTa: A Robustly Optimized BERT Pretraining Approach

26 Jul 2019 huggingface/transformers

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.

Ranked #2 on Natural Language Inference on ANLI test (using extra training data)

LANGUAGE MODELLING LEXICAL SIMPLIFICATION LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS

XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 huggingface/transformers

With its capability of modeling bidirectional contexts, denoising autoencoding based pretraining such as BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION
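The contrast the abstract draws, autoregressive language modeling versus denoising autoencoding, can be made concrete with a toy example. This is only a schematic of the two prediction targets, not XLNet's permutation-based objective.

```python
# Toy illustration of the two pretraining objectives contrasted above:
# an autoregressive factorization vs. a BERT-style masked ("denoising") target.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Autoregressive LM: predict each token from its left context only.
ar_targets = [(tokens[:i], tokens[i]) for i in range(len(tokens))]

# Denoising autoencoder (BERT-style): mask a token, predict it from both sides.
masked_pos = 2
corrupted = tokens[:masked_pos] + ["[MASK]"] + tokens[masked_pos + 1:]
dae_target = (corrupted, tokens[masked_pos])

print(ar_targets[3])  # (['the', 'cat', 'sat'], 'on')
print(dae_target)     # (['the', 'cat', '[MASK]', 'on', 'the', 'mat'], 'sat')
```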

Improving Language Understanding by Generative Pre-Training

Preprint 2018 huggingface/transformers

We demonstrate that large gains on language understanding tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.

DOCUMENT CLASSIFICATION LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE NATURAL LANGUAGE UNDERSTANDING QUESTION ANSWERING SEMANTIC SIMILARITY SEMANTIC TEXTUAL SIMILARITY
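The recipe the abstract describes, generative pre-training followed by discriminative fine-tuning, maps onto huggingface/transformers roughly as sketched below. The "openai-gpt" checkpoint name is the one the library publishes for this paper; the three-way label setup and the single training step are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: load a generatively pretrained LM and fine-tune it
# discriminatively with a freshly initialized classification head.
# num_labels=3 (an NLI-style task) and the single step are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
model = AutoModelForSequenceClassification.from_pretrained("openai-gpt", num_labels=3)

inputs = tokenizer("A soccer game with multiple males playing.", return_tensors="pt")
label = torch.tensor([0])  # task-specific gold label for this example

# One discriminative fine-tuning step (optimizer setup omitted for brevity).
loss = model(**inputs, labels=label).loss
loss.backward()
print(float(loss))
```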