Natural Language Understanding

664 papers with code • 11 benchmarks • 71 datasets

Natural Language Understanding is a core area of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications enabled by natural language understanding range from question answering to automated reasoning.

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?
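To make the task framing concrete, here is a toy sketch of natural language inference (NLI): given a premise and a hypothesis, a system assigns one of three labels. The keyword heuristic below is purely illustrative — a stand-in for a trained model, not how any real NLU system works.

```python
# Toy sketch of the NLI task framing: premise + hypothesis -> label.
# The containment/negation heuristic is illustrative only.

NLI_LABELS = ("entailment", "contradiction", "neutral")

def toy_nli(premise: str, hypothesis: str) -> str:
    """Toy labeler: exact containment -> entailment, a negated version
    of a contained hypothesis -> contradiction, otherwise neutral."""
    p, h = premise.lower(), hypothesis.lower()
    if h in p:
        return "entailment"
    if "not" in h.split() and h.replace("not ", "") in p:
        return "contradiction"
    return "neutral"

pairs = [
    ("a man is playing a guitar on stage", "a man is playing a guitar"),
    ("a man is playing a guitar", "a man is not playing a guitar"),
    ("a man is playing a guitar", "a woman is singing"),
]
for premise, hypothesis in pairs:
    print(toy_nli(premise, hypothesis))
```

Real NLI models replace the heuristic with a classifier over sentence-pair representations, but the input/output contract is the same.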

Libraries

Use these libraries to find Natural Language Understanding models and implementations
See all 10 libraries.

Latest papers with no code

Automating REST API Postman Test Cases Using LLM

no code yet • 16 Apr 2024

Postman test cases offer streamlined automation, collaboration, and dynamic data handling, making API testing more user-friendly and efficient than traditional test cases.

Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors

no code yet • 16 Apr 2024

Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of hyperbolic space, but it suffers the same fate as box embedding, since gradient-descent-style optimization is not straightforward in hyperbolic space.
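The order embedding of binary vectors that Binder proposes can be sketched with a simple bitwise check. The convention assumed here — a more general concept's set bits form a subset of the more specific concept's set bits — and the toy taxonomy codes are illustrative; Binder's actual learned representation may differ in details.

```python
# Minimal sketch of order embedding over binary vectors, assuming the
# convention that an ancestor's set bits are a subset of its descendant's.

def is_ancestor(general: int, specific: int) -> bool:
    """True if every bit set in `general` is also set in `specific`,
    i.e. the coordinate-wise partial order general <= specific holds."""
    return (general & specific) == general

# Hypothetical 8-bit codes for a toy taxonomy (illustrative only).
animal = 0b00000001
mammal = 0b00000011   # inherits animal's bit, adds its own
dog    = 0b00000111   # inherits mammal's bits
bird   = 0b00001001   # inherits animal's bit via a different branch

print(is_ancestor(animal, dog))   # animal subsumes dog
print(is_ancestor(mammal, bird))  # mammal does not subsume bird
```

The appeal of such discrete representations is that hierarchy checks reduce to cheap bitwise operations, avoiding the optimization difficulties of hyperbolic or box embeddings mentioned above.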

Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain

no code yet • 11 Apr 2024

While these LLMs display competitive performance on automated medical text benchmarks, they have been pre-trained and evaluated with a focus on a single language (mostly English).

LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements

no code yet • 9 Apr 2024

In particular, while some models are virtually unaffected by knowledge conflicts in affirmative and negative contexts, they often fail to separate the text from their internal knowledge when faced with more semantically involved modal and conditional environments.

RecGPT: Generative Personalized Prompts for Sequential Recommendation via ChatGPT Training Paradigm

no code yet • 6 Apr 2024

For the model part, we adopt the Generative Pre-trained Transformer (GPT) as the sequential recommendation model and design a user module to capture personalized information.

Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers

no code yet • 4 Apr 2024

The integration of Large Language Models (LLMs) into information retrieval has prompted a critical reevaluation of fairness in text-ranking models.

PURPLE: Making a Large Language Model a Better SQL Writer

no code yet • 29 Mar 2024

LLMs can learn to organize operator compositions from the input demonstrations for the given task.
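The idea of learning operator compositions from input demonstrations can be sketched as generic few-shot prompt assembly for text-to-SQL. This is a simplified sketch in the spirit of demonstration-based prompting, not PURPLE's actual demonstration-selection method; the schema and example questions are hypothetical.

```python
# Generic sketch of demonstration-based prompting for text-to-SQL:
# the LLM sees (question, SQL) demonstrations from which it can infer
# how to compose SQL operators for the new question.

def build_prompt(demos, schema, question):
    """Assemble a few-shot prompt from (question, SQL) demonstrations."""
    parts = [f"Schema: {schema}", ""]
    for q, sql in demos:
        parts.append(f"Question: {q}")
        parts.append(f"SQL: {sql}")
        parts.append("")
    parts.append(f"Question: {question}")
    parts.append("SQL:")
    return "\n".join(parts)

# Hypothetical demonstrations over a toy schema.
demos = [
    ("How many singers are there?", "SELECT COUNT(*) FROM singer;"),
    ("List singer names by age.", "SELECT name FROM singer ORDER BY age;"),
]
prompt = build_prompt(demos, "singer(name, age, country)",
                      "Which singers are from France?")
print(prompt)
```

The completed prompt would then be sent to an LLM, whose continuation after the final "SQL:" is taken as the predicted query.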

Are LLMs Effective Backbones for Fine-tuning? An Experimental Investigation of Supervised LLMs on Chinese Short Text Matching

no code yet • 29 Mar 2024

The recent success of Large Language Models (LLMs) has garnered significant attention in both academia and industry.

Can Machine Translation Bridge Multilingual Pretraining and Cross-lingual Transfer Learning?

no code yet • 25 Mar 2024

We furthermore provide evidence through similarity measures and investigation of parameters that this lack of positive influence is due to output separability -- which we argue is of use for machine translation but detrimental elsewhere.
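The kind of similarity measure used to compare model representations across languages can be illustrated with cosine similarity. The vectors below are hypothetical stand-ins for sentence representations; the paper's actual measures and parameter analyses are more involved.

```python
# Minimal sketch of a representation-similarity measure: cosine similarity
# between two (hypothetical) sentence representations. High similarity
# suggests translations map to nearby points; separable outputs score lower.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

en = [0.9, 0.1, 0.3]   # illustrative representation of an English sentence
fr = [0.8, 0.2, 0.35]  # illustrative representation of its French translation
print(round(cosine_similarity(en, fr), 3))
```

In practice such measures are computed over hidden states of the pretrained model for parallel sentences, and low cross-lingual similarity is one symptom of the output separability the authors describe.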

Engineering Safety Requirements for Autonomous Driving with Large Language Models

no code yet • 24 Mar 2024

Changes and updates in the requirement artifacts, which can be frequent in the automotive domain, are a challenge for SafetyOps.