Natural Language Understanding
666 papers with code • 6 benchmarks • 68 datasets
Natural Language Understanding (NLU) is a major subfield of Natural Language Processing that covers tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
Libraries
Use these libraries to find Natural Language Understanding models and implementations.
Most implemented papers
TinyBERT: Distilling BERT for Natural Language Understanding
To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of Transformer-based models.
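As a rough illustration of the general idea TinyBERT builds on, the following is a minimal sketch of soft-label knowledge distillation: the student is trained to match the teacher's temperature-softened output distribution. This is not TinyBERT's full layer-wise Transformer distillation (which also matches attention maps and hidden states); the logits and temperature here are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer distribution, exposing the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions; the student minimizes this during training.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]        # hypothetical teacher logits
good_student = [3.0, 1.0, 0.2]   # matches the teacher exactly
bad_student = [0.2, 1.0, 3.0]    # disagrees with the teacher
```

A student whose logits mirror the teacher's incurs near-zero loss, while a disagreeing student is penalized; in practice this term is combined with the ordinary hard-label loss.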
ConvBERT: Improving BERT with Span-based Dynamic Convolution
The novel convolution heads, together with the remaining self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning.
GPT Understands, Too
Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU).
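To make the pattern-prompting idea concrete, here is a hedged sketch of cloze-style classification: the input is wrapped in a natural-language pattern with a mask slot, and a verbalizer maps label words to classes. The pattern, verbalizer, and the keyword-based stand-in scorer are all illustrative assumptions; in practice a pretrained masked language model scores the candidate words.

```python
def pattern(review):
    # Wrap the input in a cloze-style natural-language pattern.
    return f"{review} All in all, it was [MASK]."

# Verbalizer: map each class label to a candidate word for the slot.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def classify(review, mask_filler):
    # mask_filler stands in for a pretrained LM that scores how well
    # each candidate word fits the [MASK] slot.
    prompt = pattern(review)
    scores = {label: mask_filler(prompt, word)
              for label, word in VERBALIZER.items()}
    return max(scores, key=scores.get)

def toy_filler(prompt, word):
    # Toy keyword-overlap scorer (an assumption, not a real LM).
    text = prompt.lower()
    pos = sum(cue in text for cue in ("loved", "wonderful", "great"))
    neg = sum(cue in text for cue in ("hated", "boring", "awful"))
    return pos - neg if word == "great" else neg - pos
```

The appeal of this formulation is that the classification task is recast in the same fill-in-the-blank form the language model was pretrained on, which is why it works well in low-data regimes.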
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
However, due to limited data from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the downstream task data and forget the knowledge of the pre-trained model.
A Comprehensive Survey on Graph Neural Networks
In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields.
CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text
The recent success of natural language understanding (NLU) systems has been troubled by results highlighting the failure of these models to generalize in a systematic and robust way.
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance.
A Relational Tsetlin Machine with Applications to Natural Language Understanding
Tsetlin machines (TMs) are a pattern recognition approach that uses finite state machines for learning and propositional logic to represent patterns.
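To illustrate the propositional-logic side of this, below is a minimal sketch of Tsetlin machine inference: each learned pattern is a conjunctive clause over binary features (possibly negated), and classification is a vote between positive and negative clauses. The clause encoding and the hand-written XOR clauses are illustrative assumptions; the learning of literals by teams of Tsetlin automata is omitted entirely.

```python
# A clause is a conjunction of literals: (feature_index, expected_value)
# pairs, where expected_value 0 means the feature appears negated.
def clause_fires(clause, features):
    return all(features[i] == expected for i, expected in clause)

def tm_vote(positive_clauses, negative_clauses, features):
    # Positive clauses vote for the class, negative clauses against it;
    # the sign of the tally decides the prediction.
    votes = sum(clause_fires(c, features) for c in positive_clauses)
    votes -= sum(clause_fires(c, features) for c in negative_clauses)
    return int(votes >= 0)

# Hand-written clauses expressing XOR over two binary features.
pos_clauses = [[(0, 1), (1, 0)], [(0, 0), (1, 1)]]   # x1 AND NOT x2, etc.
neg_clauses = [[(0, 1), (1, 1)], [(0, 0), (1, 0)]]   # both equal -> not XOR
```

The interpretability claim for TMs rests on exactly this structure: each clause is a human-readable conjunction, unlike the distributed representations of neural models.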
Data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind.