Natural Language Understanding
664 papers with code • 11 benchmarks • 71 datasets
Natural Language Understanding is an important subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
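Text classification is the simplest of the tasks listed above. As a minimal, self-contained illustration (not any specific model from the papers below), here is a toy multinomial Naive Bayes classifier over a bag-of-words representation; the training sentences and labels are invented for the example.

```python
from collections import Counter, defaultdict
import math

# Invented toy training data: (sentence, label) pairs.
train = [
    ("the team won the match", "sports"),
    ("the election results were announced", "politics"),
    ("the striker scored a late goal", "sports"),
    ("parliament passed the new bill", "politics"),
]

def tokenize(text):
    return text.lower().split()

# Count class frequencies and per-class word frequencies.
class_counts = Counter()
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for w in tokenize(text):
        word_counts[label][w] += 1
        vocab.add(w)

def classify(text):
    """Return the label with the highest log posterior."""
    best, best_score = None, float("-inf")
    total_docs = sum(class_counts.values())
    for label in class_counts:
        # Log prior plus log likelihood with add-one (Laplace) smoothing.
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("the goal decided the match"))  # -> sports
```

Production NLU systems replace the bag-of-words counts with learned contextual representations, but the pipeline shape (featurize text, score per label, pick the argmax) is the same.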
Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
Libraries
Use these libraries to find Natural Language Understanding models and implementations.

Latest papers
TabSQLify: Enhancing Reasoning Capabilities of LLMs Through Table Decomposition
Table reasoning is a challenging task that requires understanding both natural language questions and structured tabular data.
ANCHOR: LLM-driven News Subject Conditioning for Text-to-Image Synthesis
With Large Language Models (LLMs) achieving success in language and commonsense reasoning tasks, we explore the ability of different LLMs to identify and understand key subjects from abstractive captions.
MING-MOE: Enhancing Medical Multi-Task Learning in Large Language Models with Sparse Mixture of Low-Rank Adapter Experts
Large language models like ChatGPT have shown substantial progress in natural language understanding and generation, proving valuable across various disciplines, including the medical field.
On Training Data Influence of GPT Models
This paper presents GPTfluence, a novel approach that leverages a featurized simulation to assess the impact of training examples on the training dynamics of GPT models.
Data-Augmentation-Based Dialectal Adaptation for LLMs
We propose an approach that combines the strengths of different types of language models and leverages data augmentation techniques to improve task performance on three South Slavic dialects: Chakavian, Cherkano, and Torlak.
XNLIeu: a dataset for cross-lingual NLI in Basque
We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-editing on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation.
Chinese Sequence Labeling with Semi-Supervised Boundary-Aware Language Model Pre-training
Experimental results on Chinese sequence labeling datasets demonstrate that the improved BABERT variant outperforms the vanilla version, not only on these tasks but also more broadly across a range of Chinese natural language understanding tasks.
Intent Detection and Entity Extraction from BioMedical Literature
Biomedical queries have become increasingly prevalent in web searches, reflecting the growing interest in accessing biomedical literature.
Conversational Disease Diagnosis via External Planner-Controlled Large Language Models
The advancement of medical artificial intelligence (AI) has set the stage for the realization of conversational diagnosis, where AI systems mimic human doctors by engaging in dialogue with patients to deduce diagnoses.
EXPLORER: Exploration-guided Reasoning for Textual Reinforcement Learning
To tackle these issues, in this paper we present EXPLORER, an exploration-guided reasoning agent for textual reinforcement learning.