
Natural Language Inference

205 papers with code · Natural Language Processing

Natural language inference is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise".

Example:

Premise | Label | Hypothesis
A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping.
An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor.
A soccer game with multiple males playing. | entailment | Some men are playing a sport.
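In practice the task reduces to three-way classification over (premise, hypothesis) pairs. The following is a minimal sketch of running inference on the third example above, assuming the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; the label mapping is taken from that checkpoint's config and other NLI models may order labels differently.

```python
# Minimal NLI inference sketch (assumes transformers + roberta-large-mnli are available).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the (premise, hypothesis) pair as a single input sequence.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# For this checkpoint, id2label maps 0/1/2 to CONTRADICTION/NEUTRAL/ENTAILMENT.
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # expected: ENTAILMENT
```

The sketch uses a cross-encoder setup: premise and hypothesis are encoded jointly, which is the standard formulation for the benchmarks listed below.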

Benchmarks

Latest papers without code

On Learning Universal Representations Across Languages

31 Jul 2020

Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.

CONTRASTIVE LEARNING · CROSS-LINGUAL NATURAL LANGUAGE INFERENCE · LANGUAGE MODELLING · MACHINE TRANSLATION

Mono vs Multilingual Transformer-based Models: a Comparison across Several Language Tasks

19 Jul 2020

BERT (Bidirectional Encoder Representations from Transformers) and ALBERT (A Lite BERT) are methods for pre-training language models which can later be fine-tuned for a variety of Natural Language Understanding tasks.

FAKE NEWS DETECTION · LANGUAGE MODELLING · NATURAL LANGUAGE INFERENCE · NATURAL LANGUAGE UNDERSTANDING · SEMANTIC TEXTUAL SIMILARITY · SENTIMENT ANALYSIS

An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models

14 Jul 2020

Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset.

MULTI-TASK LEARNING · NATURAL LANGUAGE INFERENCE · PARAPHRASE IDENTIFICATION

Logic, Language, and Calculus

6 Jul 2020

The difference between object-language and metalanguage is crucial for logical analysis, but has not yet been examined in the field of computer science.

NATURAL LANGUAGE INFERENCE · NATURAL LANGUAGE UNDERSTANDING

KLEJ: Comprehensive Benchmark for Polish Language Understanding

ACL 2020

To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications.

NAMED ENTITY RECOGNITION · NATURAL LANGUAGE INFERENCE · NATURAL LANGUAGE UNDERSTANDING · QUESTION ANSWERING · SENTIMENT ANALYSIS

Improving Truthfulness of Headline Generation

ACL 2020

Building a binary classifier that predicts an entailment relation between an article and its headline, we filter out untruthful instances from the supervision data.
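As a rough illustration of this kind of filtering step (not the paper's actual classifier, data, or thresholds), the sketch below keeps only article–headline pairs that an off-the-shelf NLI model labels as entailment; the model choice (roberta-large-mnli) and the toy `pairs` list are assumptions made for the example.

```python
# Illustrative sketch: filter supervision pairs by predicted entailment.
# The model and the example data are assumptions, not the paper's setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
ENTAILMENT = 2  # id2label for this checkpoint: 0 CONTRADICTION, 1 NEUTRAL, 2 ENTAILMENT

def entails(article: str, headline: str) -> bool:
    """Return True if the model predicts that the article entails the headline."""
    inputs = tokenizer(article, headline, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item() == ENTAILMENT

# Hypothetical supervision data: (article, headline) pairs.
pairs = [
    ("The council approved the new budget on Tuesday.", "Council passes budget."),
    ("The council approved the new budget on Tuesday.", "Council rejects budget."),
]
filtered = [(a, h) for a, h in pairs if entails(a, h)]  # keeps only the first pair
```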

ABSTRACTIVE TEXT SUMMARIZATION · NATURAL LANGUAGE INFERENCE

On Faithfulness and Factuality in Abstractive Summarization

ACL 2020

It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.

ABSTRACTIVE TEXT SUMMARIZATION · DOCUMENT SUMMARIZATION · LANGUAGE MODELLING · NATURAL LANGUAGE INFERENCE · TEXT GENERATION

Probing Linguistic Systematicity

ACL 2020

Recently, there has been much interest in the question of whether deep natural language understanding (NLU) models exhibit systematicity, generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear.

NATURAL LANGUAGE INFERENCE · NATURAL LANGUAGE UNDERSTANDING

MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices

ACL 2020

On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone.

NATURAL LANGUAGE INFERENCE · QUESTION ANSWERING · TRANSFER LEARNING