Natural Language Understanding

666 papers with code • 6 benchmarks • 68 datasets

Natural Language Understanding is an important subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?


Most implemented papers

TinyBERT: Distilling BERT for Natural Language Understanding

huawei-noah/Pretrained-Language-Model Findings of the Association for Computational Linguistics 2020

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models.
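
For background, the generic knowledge-distillation objective trains a small student to match a larger teacher's softened predictions. Below is a minimal PyTorch sketch of that soft-label loss; TinyBERT's full two-stage Transformer distillation additionally matches embedding, hidden-state, and attention layers, which is omitted here, and the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

def soft_label_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-label KD loss: KL divergence between temperature-softened
    teacher and student distributions (teacher logits are not back-propagated)."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Toy usage with random logits for a batch of 4 examples and 3 classes.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
loss = soft_label_distillation_loss(student_logits, teacher_logits)
loss.backward()
```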

ConvBERT: Improving BERT with Span-based Dynamic Convolution

yitu-opensource/ConvBert NeurIPS 2020

The novel convolution heads, together with the remaining self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning.
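
As a rough illustration of the mixed-attention idea (splitting the model width between self-attention heads and convolution heads, then concatenating their outputs), here is a simplified PyTorch sketch. It uses an ordinary depthwise convolution rather than the paper's span-based dynamic convolution, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class SimplifiedMixedAttention(nn.Module):
    """Half the hidden width goes through self-attention (global context),
    the other half through a depthwise 1-D convolution (local context)."""
    def __init__(self, hidden_size=256, num_heads=4, kernel_size=9):
        super().__init__()
        half = hidden_size // 2
        self.attn = nn.MultiheadAttention(half, num_heads, batch_first=True)
        self.conv = nn.Conv1d(half, half, kernel_size,
                              padding=kernel_size // 2, groups=half)
        self.proj_in = nn.Linear(hidden_size, hidden_size)
        self.proj_out = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):                      # x: (batch, seq_len, hidden)
        a, c = self.proj_in(x).chunk(2, dim=-1)
        attn_out, _ = self.attn(a, a, a)       # global mixing
        conv_out = self.conv(c.transpose(1, 2)).transpose(1, 2)  # local mixing
        return self.proj_out(torch.cat([attn_out, conv_out], dim=-1))

# Toy usage
x = torch.randn(2, 16, 256)
print(SimplifiedMixedAttention()(x).shape)     # torch.Size([2, 16, 256])
```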

GPT Understands, Too

THUDM/P-tuning 18 Mar 2021

Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
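
A tiny example of the basic idea of prompting a masked language model with a natural-language pattern, using the Hugging Face transformers fill-mask pipeline. This shows a manual discrete prompt, not the paper's learned continuous prompts; the model checkpoint and prompt wording are assumptions for illustration.

```python
from transformers import pipeline

# Downloads bert-base-uncased on first run; the model choice is illustrative.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Frame a sentiment decision as a cloze-style natural language pattern.
prompt = "The movie was absolutely wonderful. Overall it was a [MASK] film."
for prediction in fill_mask(prompt, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```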

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

huggingface/transformers TACL 2020

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
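
The listed implementation is huggingface/transformers, whose EncoderDecoderModel supports this kind of warm-starting from pretrained checkpoints. A minimal sketch is below; the checkpoint names and decoding settings are illustrative, and the warm-started model still needs fine-tuning (e.g. on summarization) before its outputs are meaningful.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

# Warm-start both encoder and decoder from a pretrained BERT checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
# Cross-attention weights and the decoder LM head are newly initialized,
# so the model must be fine-tuned before it produces useful generations.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("Unsupervised pre-training has revolutionized NLP.",
                   return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```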

SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

namisan/mt-dnn ACL 2020

Due to the limited data available from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the downstream data and forget the knowledge of the pre-trained model.
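
A heavily simplified sketch of the smoothness-inducing idea: penalize how much the model's output distribution changes under a small perturbation of the input embeddings. SMART itself uses an adversarially chosen perturbation and Bregman proximal-point updates, whereas this toy version just uses random noise; the stand-in model and sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def smoothness_regularizer(model, input_embeddings, noise_eps=1e-3):
    """Symmetric KL between predictions on clean and slightly perturbed embeddings.
    `model` is assumed to map embeddings of shape (batch, seq, dim) to logits."""
    clean_logits = model(input_embeddings)
    noise = noise_eps * torch.randn_like(input_embeddings)
    noisy_logits = model(input_embeddings + noise)
    p = F.log_softmax(clean_logits, dim=-1)
    q = F.log_softmax(noisy_logits, dim=-1)
    # Symmetric KL divergence, averaged over the batch.
    return (F.kl_div(p, q, log_target=True, reduction="batchmean")
            + F.kl_div(q, p, log_target=True, reduction="batchmean"))

# Toy usage with a stand-in "model": mean-pool embeddings, then a linear classifier.
classifier = torch.nn.Linear(32, 3)
toy_model = lambda emb: classifier(emb.mean(dim=1))
embeddings = torch.randn(4, 10, 32)
print(smoothness_regularizer(toy_model, embeddings))
```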

A Comprehensive Survey on Graph Neural Networks

GustikS/NeuraLogic 3 Jan 2019

In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields.
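
For readers new to GNNs, here is a minimal sketch of one GCN-style message-passing layer: neighbor features are averaged via a normalized adjacency matrix and then linearly transformed. This is generic illustrative code, not code from the survey, and the toy graph is made up.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(normalized_adjacency @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_features, adjacency):
        # Add self-loops and row-normalize so each node averages over neighbors.
        a_hat = adjacency + torch.eye(adjacency.size(0))
        a_norm = a_hat / a_hat.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(a_norm @ node_features))

# Toy graph: 4 nodes with 8-dimensional features and an undirected adjacency matrix.
features = torch.randn(4, 8)
adjacency = torch.tensor([[0, 1, 0, 0],
                          [1, 0, 1, 1],
                          [0, 1, 0, 0],
                          [0, 1, 0, 0]], dtype=torch.float)
print(SimpleGCNLayer(8, 16)(features, adjacency).shape)  # torch.Size([4, 16])
```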

CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text

facebookresearch/clutrr IJCNLP 2019

The recent success of natural language understanding (NLU) systems has been troubled by results highlighting the failure of these models to generalize in a systematic and robust way.

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners

timoschick/pet NAACL 2021

When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance.
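
The repository implements pattern-exploiting training (PET), which recasts classification as a cloze question and maps each label to a verbalizer token scored by a masked language model. A tiny zero-shot-style sketch of that scoring step follows; the pattern, verbalizer words, and checkpoint are illustrative assumptions, and PET additionally fine-tunes on a handful of labeled examples.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Pattern: turn sentiment classification into a cloze question.
text = "the service was slow and the food arrived cold"
pattern = f"{text}. It was [MASK]."
verbalizer = {"positive": "great", "negative": "terrible"}

inputs = tokenizer(pattern, return_tensors="pt")
mask_position = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_position]

# Score each label by the probability of its verbalizer token at the mask.
for label, word in verbalizer.items():
    token_id = tokenizer.convert_tokens_to_ids(word)
    print(label, logits.softmax(dim=-1)[token_id].item())
```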

A Relational Tsetlin Machine with Applications to Natural Language Understanding

cair/TsetlinMachine 22 Feb 2021

Tsetlin machines (TMs) are a pattern-recognition approach that uses finite-state machines for learning and propositional logic to represent patterns.
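
To make the propositional-logic part concrete, here is a tiny pure-Python sketch of how a Tsetlin machine clause represents a pattern as a conjunction of included literals (input bits or their negations). The learning procedure via teams of Tsetlin automata is omitted, and the example clause is made up for illustration.

```python
def evaluate_clause(input_bits, included_literals):
    """A clause is a conjunction over literals: ('x', i) requires bit i to be 1,
    ('not_x', i) requires bit i to be 0. It fires only if every literal holds."""
    for kind, index in included_literals:
        bit = input_bits[index]
        if kind == "x" and bit != 1:
            return 0
        if kind == "not_x" and bit != 0:
            return 0
    return 1

# Example: a clause that fires for inputs matching the pattern x0 AND NOT x2.
clause = [("x", 0), ("not_x", 2)]
print(evaluate_clause([1, 0, 0, 1], clause))  # 1 (pattern matches)
print(evaluate_clause([1, 1, 1, 0], clause))  # 0 (x2 is set, pattern fails)
```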

Data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language

pytorch/fairseq Preprint 2022

While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind.
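
A very condensed sketch of the data2vec-style objective: a student network sees a masked input and regresses to contextual target representations produced by an exponential-moving-average teacher on the unmasked input. All network, masking, and loss details below are simplified stand-ins for illustration, not the released fairseq implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
teacher = copy.deepcopy(encoder)          # EMA teacher, never trained by gradients
for p in teacher.parameters():
    p.requires_grad = False

x = torch.randn(2, 16, 64)                # a batch of already-embedded inputs
mask = torch.rand(2, 16) < 0.5            # mask roughly half the time steps

with torch.no_grad():
    targets = teacher(x)                  # contextual targets from the full input

student_input = x.clone()
student_input[mask] = 0.0                 # crude masking of the student's view
predictions = encoder(student_input)

# Regress student outputs onto teacher targets at masked positions only.
loss = F.smooth_l1_loss(predictions[mask], targets[mask])
loss.backward()

# After each step, update the teacher as an exponential moving average.
with torch.no_grad():
    for t_p, s_p in zip(teacher.parameters(), encoder.parameters()):
        t_p.mul_(0.999).add_(s_p, alpha=0.001)
```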