Natural Language Understanding

666 papers with code • 6 benchmarks • 68 datasets

Natural Language Understanding (NLU) is a subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by NLU range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Latest papers with no code

PURPLE: Making a Large Language Model a Better SQL Writer

no code yet • 29 Mar 2024

LLMs can learn to organize operator compositions from the input demonstrations for the given task.
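
For illustration, learning "operator compositions from the input demonstrations" amounts to few-shot prompting: worked NL-to-SQL examples are placed in the prompt before the target question so the model can reuse their SQL operator patterns. A minimal prompt-construction sketch (the schema string, demonstrations, and `build_prompt` helper are illustrative, not PURPLE's actual pipeline):

```python
# Hedged sketch: assembling a few-shot NL-to-SQL prompt from demonstrations.
# PURPLE's retrieval of demonstrations matched by operator composition is more
# involved than what is shown here; this only illustrates the prompt layout.
def build_prompt(schema: str, demos: list[tuple[str, str]], question: str) -> str:
    parts = [f"Database schema:\n{schema}\n"]
    for nl, sql in demos:  # demonstrations exhibiting the operator compositions to reuse
        parts.append(f"Question: {nl}\nSQL: {sql}\n")
    parts.append(f"Question: {question}\nSQL:")
    return "\n".join(parts)

demos = [("How many singers are there?", "SELECT COUNT(*) FROM singer")]
print(build_prompt("singer(id, name, age)", demos, "How many singers are older than 30?"))
```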

Are LLMs Effective Backbones for Fine-tuning? An Experimental Investigation of Supervised LLMs on Chinese Short Text Matching

no code yet • 29 Mar 2024

The recent success of Large Language Models (LLMs) has garnered significant attention in both academia and industry.

Can Machine Translation Bridge Multilingual Pretraining and Cross-lingual Transfer Learning?

no code yet • 25 Mar 2024

We furthermore provide evidence, through similarity measures and an investigation of parameters, that this lack of positive influence is due to output separability, which we argue is useful for machine translation but detrimental elsewhere.
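
As background, one widely used representation similarity measure is linear centered kernel alignment (CKA); the snippet does not specify which measures the paper uses, so the sketch below is only an assumed example of how such a comparison might be computed:

```python
# Hedged sketch: linear CKA between two sets of encoder representations.
import torch

def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """X: (n, d1) and Y: (n, d2) features computed over the same n examples."""
    X = X - X.mean(dim=0, keepdim=True)  # center each feature dimension
    Y = Y - Y.mean(dim=0, keepdim=True)
    cross = torch.linalg.norm(X.T @ Y, ord="fro") ** 2
    return cross / (torch.linalg.norm(X.T @ X, ord="fro")
                    * torch.linalg.norm(Y.T @ Y, ord="fro"))

reps_a = torch.randn(512, 768)  # e.g., outputs of an MT-initialized encoder
reps_b = torch.randn(512, 768)  # e.g., outputs of a standard pretrained encoder
print(linear_cka(reps_a, reps_b))  # near 0 for random features, 1 for identical ones
```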

Engineering Safety Requirements for Autonomous Driving with Large Language Models

no code yet • 24 Mar 2024

Changes and updates in the requirement artifacts, which can be frequent in the automotive domain, are a challenge for SafetyOps.

VLUE: A New Benchmark and Multi-task Knowledge Transfer Learning for Vietnamese Natural Language Understanding

no code yet • 23 Mar 2024

The success of Natural Language Understanding (NLU) benchmarks in various languages, such as GLUE for English, CLUE for Chinese, KLUE for Korean, and IndoNLU for Indonesian, has facilitated the evaluation of new NLU models across a wide range of tasks.

MasonTigers at SemEval-2024 Task 9: Solving Puzzles with an Ensemble of Chain-of-Thoughts

no code yet • 22 Mar 2024

Our paper presents team MasonTigers' submission to SemEval-2024 Task 9, which provides a dataset of puzzles for testing natural language understanding.
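
For intuition, an ensemble of chain-of-thoughts can be read as self-consistency-style voting: sample several reasoning chains and keep the majority answer. A minimal sketch, assuming a hypothetical `generate_cot` stand-in for the actual LLM calls (the team's real prompts and models are not shown here):

```python
# Hedged sketch: majority voting over multiple chain-of-thought samples.
from collections import Counter

def generate_cot(puzzle: str, seed: int) -> tuple[str, str]:
    # Hypothetical placeholder: a real system would sample an LLM with temperature > 0
    # and parse out (reasoning_chain, final_answer).
    return (f"reasoning for seed {seed}", f"answer-{seed % 2}")

def ensemble_answer(puzzle: str, n_samples: int = 5) -> str:
    answers = [generate_cot(puzzle, seed)[1] for seed in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]  # majority vote over final answers

print(ensemble_answer("Which word does not belong: apple, banana, carrot, cherry?"))
```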

Towards Knowledge-Grounded Natural Language Understanding and Generation

no code yet • 22 Mar 2024

This thesis investigates how natural language understanding and generation with transformer models can benefit from grounding the models in knowledge representations, and it addresses the following key research questions: (i) Can knowledge of entities extend its benefits beyond entity-centric tasks, such as entity linking?

Do Not Worry if You Do Not Have Data: Building Pretrained Language Models Using Translationese

no code yet • 20 Mar 2024

In this paper, we explore the utility of Translationese as synthetic data created using machine translation for pre-training language models (LMs).
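
For concreteness, "Translationese" here means text produced by machine-translating a monolingual corpus into the target language, with the output then used for pre-training. A minimal sketch, assuming a Hugging Face MarianMT checkpoint (the model, language pair, and generation settings are illustrative, not necessarily the paper's setup):

```python
# Hedged sketch: generating Translationese pre-training text with an off-the-shelf
# MT model. Helsinki-NLP/opus-mt-hi-en (Hindi -> English) is just one example pair.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-hi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

source_sentences = ["यह एक उदाहरण वाक्य है।"]  # monolingual source-language corpus
batch = tokenizer(source_sentences, return_tensors="pt", padding=True, truncation=True)
generated = model.generate(**batch, max_length=256)
translationese = tokenizer.batch_decode(generated, skip_special_tokens=True)
# The decoded sentences form synthetic target-language data for LM pre-training.
print(translationese)
```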

BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models

no code yet • 19 Mar 2024

Low-rank adaptation (LoRA) is a popular method for fine-tuning large-scale pre-trained models in downstream tasks by learning low-rank incremental matrices.
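
As background, the low-rank incremental matrices that LoRA learns can be written as W' = W + (alpha/r) * BA, where W stays frozen and only the rank-r factors B and A are trained. A minimal PyTorch sketch of plain LoRA (the rank and scaling values are illustrative, and this is not the bi-level BiLoRA procedure the paper proposes):

```python
# Hedged sketch: a frozen linear layer wrapped with a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + (alpha/r) * B A x; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
```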

Energy-Based Models with Applications to Speech and Language Processing

no code yet • 16 Mar 2024

Therefore, the purpose of this monograph is to present a systematic introduction to energy-based models, including both algorithmic progress and applications in speech and language processing.
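
As background, an energy-based model assigns a scalar energy E(x) to each input and defines p(x) = exp(-E(x)) / Z, where the normalizer Z is usually intractable; that intractability drives most of the algorithmic work such models require. A minimal parameterization sketch (illustrative, not taken from the monograph):

```python
# Hedged sketch: an energy network defining an unnormalized density.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.SiLU(), nn.Linear(128, 1))

    def energy(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar energy per example

    def unnormalized_log_prob(self, x: torch.Tensor) -> torch.Tensor:
        return -self.energy(x)  # log p(x) up to the intractable log Z

ebm = EnergyNet(dim=16)
x = torch.randn(8, 16)
print(ebm.unnormalized_log_prob(x).shape)  # torch.Size([8])
```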