Semantic Parsing
383 papers with code • 20 benchmarks • 42 datasets
Semantic Parsing is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations designed to capture the meaning of any sentence, such as λ-calculus or Abstract Meaning Representation. Alternatively, in more task-driven approaches to Semantic Parsing, the meaning representations are often executable programs such as SQL queries, robotic commands, smartphone instructions, or even programs in general-purpose languages like Python and Java.
Source: Tranx: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
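As a concrete illustration of the task, the sketch below maps a natural language utterance to an executable SQL query. It is a minimal, hand-written pattern matcher, not any of the neural parsers listed on this page; the utterance template, the `flights` table, and its column names are all illustrative assumptions.

```python
import re
from typing import Optional

def parse_to_sql(utterance: str) -> Optional[str]:
    """Toy semantic parser: map a flight-query utterance to a SQL string.

    Returns None when the utterance does not match the single
    supported pattern (real parsers cover far broader language).
    """
    # Hypothetical template; schema (flights/origin/destination) is assumed.
    m = re.match(r"show me flights from (\w+) to (\w+)", utterance.lower())
    if m is None:
        return None
    src, dst = m.groups()
    return (f"SELECT * FROM flights "
            f"WHERE origin = '{src}' AND destination = '{dst}'")

print(parse_to_sql("Show me flights from Boston to Denver"))
# SELECT * FROM flights WHERE origin = 'boston' AND destination = 'denver'
```

Neural approaches such as TranX replace the hand-written pattern with a learned model, but the input/output contract is the same: an utterance in, a formal meaning representation out.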
Latest papers
ReCoMIF: Reading comprehension based multi-source information fusion network for Chinese spoken language understanding
It usually includes slot filling and intent detection (SFID) tasks aiming at semantic parsing of utterances.
Holistic Exploration on Universal Decompositional Semantic Parsing: Architecture, Data Augmentation, and LLM Paradigm
In this paper, we conduct a holistic exploration of Universal Decompositional Semantic (UDS) parsing.
Optimal Transport Posterior Alignment for Cross-lingual Semantic Parsing
Cross-lingual semantic parsing transfers parsing capability from a high-resource language (e.g., English) to low-resource languages with scarce training data.
Incorporating Graph Information in Transformer-based AMR Parsing
Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims to provide a semantic graph abstraction representing a given text.
On Evaluating Multilingual Compositional Generalization with Translated Datasets
To address this limitation, we craft a faithful rule-based translation of the MCWQ dataset from English to Chinese and Japanese.
Discourse Representation Structure Parsing for Chinese
Previous work has predominantly focused on monolingual English semantic parsing.
T5-SR: A Unified Seq-to-Seq Decoding Strategy for Semantic Parsing
Translating natural language queries into SQL in a seq2seq manner has attracted much attention recently.
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
However, existing CLSP models are separately proposed and evaluated on datasets covering limited tasks and applications, impeding a comprehensive and unified evaluation of CLSP on a diverse range of NLs and MRs. To this end, we present XSemPLR, a unified benchmark for cross-lingual semantic parsing featuring 22 natural languages and 8 meaning representations, built by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains.
Recognize Anything: A Strong Image Tagging Model
We are releasing the RAM at https://recognize-anything.github.io/ to foster the advancements of large models in computer vision.
Does Character-level Information Always Improve DRS-based Semantic Parsing?
In the experiments, we compare F1-scores by shuffling the order and randomizing character sequences after testing the performance of character-level information.