Natural Language Understanding
664 papers with code • 11 benchmarks • 71 datasets
Natural Language Understanding is an important subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.
Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
Latest papers with no code
Automating REST API Postman Test Cases Using LLM
Postman test cases offer streamlined automation, collaboration, and dynamic data handling, providing a user-friendly and efficient approach to API testing compared to traditional test cases.
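As a rough illustration of this kind of pipeline, the hedged Python sketch below asks an LLM to draft the test script for a single endpoint and wraps it in a Postman collection item. The prompt wording, the call_llm placeholder, and the helper names are assumptions for illustration, not the paper's implementation.

# Hypothetical sketch: have an LLM draft a Postman test script for one
# REST endpoint, then wrap it in a minimal Postman collection item.
def build_prompt(method: str, url: str, expected_status: int) -> str:
    return (
        "Write a Postman test script (JavaScript, pm.* API) that checks:\n"
        f"- the response status is {expected_status}\n"
        "- the response body is valid JSON\n"
        f"Endpoint: {method} {url}\n"
    )

def generate_postman_test(call_llm, method, url, expected_status=200) -> dict:
    # call_llm is a stand-in for whatever completion API is used.
    script = call_llm(build_prompt(method, url, expected_status))
    return {
        "name": f"{method} {url}",
        "request": {"method": method, "url": url},
        "event": [{
            "listen": "test",
            "script": {"type": "text/javascript",
                       "exec": script.splitlines()},
        }],
    }

The returned item can be dumped into a collection JSON file and executed with Newman or the Postman runner.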
Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors
Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of hyperbolic space, but it suffers from the same drawback as box embedding: gradient-descent-style optimization is not straightforward in hyperbolic space.
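The order-embedding idea in the title can be made concrete with a toy Python sketch, assuming the common containment convention in which a descendant's binary vector covers every bit set in its ancestor's. The vectors below are hand-set for illustration, whereas Binder learns them.

import numpy as np

# Toy binary embeddings; in practice these would be learned.
emb = {
    "animal": np.array([1, 0, 0, 0], dtype=bool),
    "dog":    np.array([1, 1, 0, 0], dtype=bool),
    "poodle": np.array([1, 1, 1, 0], dtype=bool),
}

def is_a(child: np.ndarray, parent: np.ndarray) -> bool:
    # Order-embedding containment: every bit set in the parent
    # must also be set in the child (parent <= child elementwise).
    return bool(np.all(parent <= child))

assert is_a(emb["poodle"], emb["dog"])      # poodle IS-A dog
assert is_a(emb["dog"], emb["animal"])      # dog IS-A animal
assert not is_a(emb["animal"], emb["dog"])  # animal is not a dog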
Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain
While these LLMs display competitive performance on automated medical text benchmarks, they have been pre-trained and evaluated with a focus on a single language (mostly English).
LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements
In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, they often fail to separate the text from their internal knowledge when faced with more semantically involved modal and conditional environments.
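One way to picture the contrast being described: probe the same counterfactual fact in an affirmative context and in a conditional one. The templates below are illustrative, not the paper's benchmark.

# A counterfactual that conflicts with likely parametric knowledge.
affirmative = ("Passage: The Eiffel Tower is in Rome.\n"
               "Question: Where is the Eiffel Tower?")
conditional = ("Suppose the Eiffel Tower is in Rome.\n"
               "Question: If that were true, where would the Eiffel Tower be?")

# A model reading faithfully answers "Rome" in both framings; the reported
# failure mode is reverting to parametric knowledge ("Paris") in the
# conditional framing.
for prompt in (affirmative, conditional):
    print(prompt, end="\n\n")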
RecGPT: Generative Personalized Prompts for Sequential Recommendation via ChatGPT Training Paradigm
For the model part, we adopt Generative Pre-training Transformer (GPT) as the sequential recommendation model and design a user module to capture personalized information.
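A hedged PyTorch sketch of that shape: a causal transformer over the user's item sequence, with a learned user embedding standing in for the "user module". The dimensions and the additive injection of the user vector are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class TinyRecGPT(nn.Module):
    def __init__(self, n_items: int, n_users: int, d: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.user_emb = nn.Embedding(n_users, d)  # the "user module"
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, n_items)

    def forward(self, item_seq, user_id):
        # Inject personalized information by adding the user vector
        # to every position of the interaction sequence.
        h = self.item_emb(item_seq) + self.user_emb(user_id).unsqueeze(1)
        # Causal mask: each step attends only to earlier interactions.
        mask = nn.Transformer.generate_square_subsequent_mask(item_seq.size(1))
        h = self.encoder(h, mask=mask)
        return self.head(h)  # logits over the next item at each position

# Example: scores = TinyRecGPT(1000, 50)(torch.randint(0, 1000, (2, 8)),
#                                        torch.tensor([3, 7]))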
Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers
The integration of Large Language Models (LLMs) in information retrieval has prompted a critical reevaluation of fairness in text-ranking models.
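To make fairness in ranking concrete, here is a sketch of one common exposure measure: average position-discounted exposure per group. The log discount and the group labels are illustrative assumptions, not necessarily the metric the paper uses.

import math
from collections import defaultdict

def group_exposure(ranking, groups):
    """Average position-discounted exposure per group.

    ranking: list of item ids, best first.
    groups:  dict mapping item id -> group label.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for rank, item in enumerate(ranking, start=1):
        g = groups[item]
        totals[g] += 1.0 / math.log2(rank + 1)  # DCG-style discount
        counts[g] += 1
    return {g: totals[g] / counts[g] for g in totals}

# A fair ranker yields similar average exposure across comparable groups:
print(group_exposure(["a", "b", "c", "d"],
                     {"a": "g1", "b": "g2", "c": "g1", "d": "g2"}))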
PURPLE: Making a Large Language Model a Better SQL Writer
LLMs can learn to organize operator compositions from the input demonstrations for the given task.
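The demonstration-driven prompting described here can be sketched as below: the model sees worked examples whose SQL exhibits the relevant operator compositions (aggregation, grouping, joins) before the new question. The prompt layout, the single demonstration, and call_llm are hypothetical stand-ins, not the paper's pipeline.

# Demonstrations showing the operator compositions the task needs.
DEMOS = [
    ("Schema: orders(id, user_id, total)\n"
     "Q: total revenue per user",
     "SELECT user_id, SUM(total) FROM orders GROUP BY user_id;"),
]

def build_sql_prompt(schema: str, question: str) -> str:
    parts = ["Translate the question into SQL.\n"]
    for q, sql in DEMOS:
        parts.append(f"{q}\nSQL: {sql}\n")
    parts.append(f"Schema: {schema}\nQ: {question}\nSQL:")
    return "\n".join(parts)

def text_to_sql(call_llm, schema: str, question: str) -> str:
    # call_llm is a placeholder for whatever LLM API is used.
    return call_llm(build_sql_prompt(schema, question)).strip()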
Are LLMs Effective Backbones for Fine-tuning? An Experimental Investigation of Supervised LLMs on Chinese Short Text Matching
The recent success of Large Language Models (LLMs) has garnered significant attention in both academia and industry.
Can Machine Translation Bridge Multilingual Pretraining and Cross-lingual Transfer Learning?
We furthermore provide evidence, through similarity measures and an investigation of parameters, that this lack of positive influence is due to output separability, which we argue is useful for machine translation but detrimental elsewhere.
Engineering Safety Requirements for Autonomous Driving with Large Language Models
Changes and updates in the requirement artifacts, which can be frequent in the automotive domain, are a challenge for SafetyOps.