Natural Language Understanding

681 papers with code • 6 benchmarks • 69 datasets

Natural Language Understanding is an important subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Libraries

Use these libraries to find Natural Language Understanding models and implementations
See all 10 libraries.

Most implemented papers

Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding

namisan/mt-dnn 20 Apr 2019

This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple natural language understanding tasks.
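The distillation recipe in this setting can be illustrated with a short, generic sketch: a student classifier is trained on the softened predictions of a teacher (e.g., an ensemble) in addition to the hard labels. The tensors, temperature, and mixing weight below are illustrative assumptions, not the MT-DNN implementation.

```python
# Minimal knowledge-distillation sketch (not the MT-DNN code): the student
# matches temperature-softened teacher probabilities plus the usual cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage with random tensors standing in for a per-task batch.
student_logits = torch.randn(8, 3, requires_grad=True)
teacher_logits = torch.randn(8, 3)          # e.g., averaged ensemble logits
labels = torch.randint(0, 3, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```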

A Hybrid Neural Network Model for Commonsense Reasoning

namisan/mt-dnn WS 2019

A hybrid neural network (HNN) consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers.
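The shared-encoder design can be sketched as two task-specific heads sitting on a single BERT encoder; the head shapes and pooling choice below are illustrative assumptions, not the paper's exact layers.

```python
# Sketch of a two-head model sharing one BERT encoder (illustrative only).
import torch.nn as nn
from transformers import AutoModel

class TwoHeadCommonsenseModel(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)        # shared contextual encoder
        hidden = self.encoder.config.hidden_size
        self.mlm_head = nn.Linear(hidden, self.encoder.config.vocab_size)  # masked-LM scoring
        self.sim_head = nn.Linear(hidden, 1)                   # semantic-similarity scoring

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state                   # per-token states for the LM head
        pooled = token_states[:, 0]                            # [CLS] state for the similarity head
        return self.mlm_head(token_states), self.sim_head(pooled)
```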

Ludwig: a type-based declarative deep learning toolbox

uber/ludwig 17 Sep 2019

In this work we present Ludwig, a flexible, extensible, and easy-to-use toolbox that allows users to train deep learning models and obtain predictions from them without writing code.
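A minimal example of the declarative style, assuming Ludwig's Python API and a hypothetical CSV file with `text` and `label` columns; the feature schema follows Ludwig's documented input/output configuration, and the file and column names are made up for illustration.

```python
# Sketch of declarative training with Ludwig (dataset and column names are hypothetical).
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "text", "type": "text"}],
    "output_features": [{"name": "label", "type": "category"}],
}

model = LudwigModel(config)
results = model.train(dataset="reviews.csv")      # model type and training loop come from the config
predictions, _ = model.predict(dataset="reviews.csv")
```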

Incorporating BERT into Neural Machine Translation

bert-nmt/bert-nmt ICLR 2020

While BERT is more commonly used for fine-tuning than as a contextual embedding in downstream language understanding tasks, our preliminary exploration in NMT finds that using BERT as a contextual embedding works better than using it for fine-tuning.
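One way to read "BERT as contextual embedding" is to keep BERT frozen and feed its hidden states into the translation model as source-side features. The sketch below projects frozen BERT states into a plain Transformer encoder; this is a hedged simplification, not the attention-based fusion used in the bert-nmt repository.

```python
# Simplified sketch: frozen BERT hidden states as input features for an NMT encoder.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertAugmentedEncoder(nn.Module):
    def __init__(self, d_model=512, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():            # treat BERT as a fixed feature extractor
            p.requires_grad = False
        self.proj = nn.Linear(self.bert.config.hidden_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            bert_states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = self.proj(bert_states)                  # contextual embeddings as encoder input
        return self.encoder(x, src_key_padding_mask=~attention_mask.bool())

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["a small example sentence"], return_tensors="pt", padding=True)
states = BertAugmentedEncoder()(batch["input_ids"], batch["attention_mask"])
```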

The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding

namisan/mt-dnn ACL 2020

We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.

UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training

microsoft/unilm 28 Feb 2020

We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM).
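The two objectives can be illustrated with a deliberately simplified sketch that uses a vanilla masked LM: the autoencoding loss predicts every masked token from the corrupted context, while the partially autoregressive loss predicts masked positions block by block, revealing earlier blocks as it goes. The real PMLM computes both in a single pass with pseudo-mask tokens and custom attention masks; the masked positions and block order below are made up for the example.

```python
# Conceptual sketch of the two UniLMv2 objectives, shown as separate passes for clarity.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

enc = tok("the quick brown fox jumps over the lazy dog", return_tensors="pt")
ids = enc["input_ids"].clone()
masked_positions = [2, 3, 6]                        # hypothetical sampled positions

# Autoencoding (AE): predict all masked tokens from the corrupted context at once.
ae_input = ids.clone()
ae_input[0, masked_positions] = tok.mask_token_id
ae_labels = torch.full_like(ids, -100)              # -100 = ignored by the loss
ae_labels[0, masked_positions] = ids[0, masked_positions]
ae_loss = model(ae_input, labels=ae_labels).loss

# Partially autoregressive (PAR): factorize the masked positions block by block,
# revealing each block before predicting the next (one-token blocks here).
par_loss = 0.0
par_input = ae_input.clone()
for pos in masked_positions:
    labels = torch.full_like(ids, -100)
    labels[0, pos] = ids[0, pos]
    par_loss = par_loss + model(par_input, labels=labels).loss
    par_input[0, pos] = ids[0, pos]                  # reveal the gold token

total_loss = ae_loss + par_loss / len(masked_positions)
```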

Sum-product networks: A survey

SPFlow/SPFlow 2 Apr 2020

This paper offers a survey of SPNs, including their definition, the main algorithms for inference and learning from data, the main applications, a brief review of software libraries, and a comparison with related models.
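As a quick illustration of the definition and of bottom-up inference, the sketch below evaluates a tiny hand-built sum-product network over two binary variables. The structure and weights are made up for the example; the SPFlow library additionally provides structure and parameter learning.

```python
# Minimal hand-built sum-product network over two binary variables (illustrative).
# Leaves are univariate distributions, product nodes combine disjoint scopes,
# and sum nodes are weighted mixtures; evaluation is one bottom-up pass.
from math import prod

def bernoulli(var, p):
    # Leaf: P(X_var = value) under a Bernoulli(p) distribution.
    return lambda assignment: p if assignment[var] == 1 else 1.0 - p

def product_node(*children):
    # Product node: multiplies children defined over disjoint variables.
    return lambda assignment: prod(c(assignment) for c in children)

def sum_node(weights, *children):
    # Sum node: weighted mixture of children over the same variables.
    return lambda assignment: sum(w * c(assignment) for w, c in zip(weights, children))

# SPN: 0.6 * [Bern(X1;0.9) * Bern(X2;0.2)] + 0.4 * [Bern(X1;0.1) * Bern(X2;0.7)]
spn = sum_node(
    [0.6, 0.4],
    product_node(bernoulli("X1", 0.9), bernoulli("X2", 0.2)),
    product_node(bernoulli("X1", 0.1), bernoulli("X2", 0.7)),
)

print(spn({"X1": 1, "X2": 0}))                         # exact joint probability P(X1=1, X2=0)
print(sum(spn({"X1": 1, "X2": v}) for v in (0, 1)))    # marginal P(X1=1), still linear time
```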

KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding

kakaobrain/KorNLUDatasets Findings of ACL 2020

Although several benchmark datasets for natural language inference (NLI) and semantic textual similarity (STS) have been released in English and a few other languages, there are no publicly available NLI or STS datasets in the Korean language.

CLUE: A Chinese Language Understanding Evaluation Benchmark

CLUEbenchmark/CLUE COLING 2020

The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks.

Adversarial Training for Large Neural Language Models

namisan/mt-dnn 20 Apr 2020

In natural language processing (NLP), pre-training large neural language models such as BERT has demonstrated impressive gains in generalization for a variety of tasks, with further improvement from adversarial fine-tuning.
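A hedged sketch of embedding-space adversarial fine-tuning in the FGM style, one simple member of the adversarial-training family this paper studies (its ALUM procedure uses a virtual-adversarial regularizer rather than the plain perturbed loss shown here). Model names and hyperparameters are illustrative.

```python
# Simplified embedding-space adversarial fine-tuning (FGM-style), not the exact ALUM
# algorithm: perturb the word embeddings along the gradient, add the loss on the
# perturbed input to the clean loss, then restore the embeddings and step.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
emb = model.get_input_embeddings()
epsilon = 1e-3                                    # illustrative perturbation size

batch = tok(["a tiny example", "another example"], return_tensors="pt", padding=True)
labels = torch.tensor([0, 1])

# Clean pass.
loss = model(**batch, labels=labels).loss
loss.backward()

# Adversarial pass: perturb embedding weights in the gradient direction, re-run, restore.
grad = emb.weight.grad
delta = epsilon * grad / (grad.norm() + 1e-12)
emb.weight.data.add_(delta)
adv_loss = model(**batch, labels=labels).loss
adv_loss.backward()
emb.weight.data.sub_(delta)                       # restore the original embeddings

optimizer.step()
optimizer.zero_grad()
```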