Semantic Parsing

383 papers with code • 20 benchmarks • 42 datasets

Semantic Parsing is the task of transducing natural language utterances into formal meaning representations. The target meaning representations can be defined according to a wide variety of formalisms. These include linguistically motivated semantic representations designed to capture the meaning of any sentence, such as λ-calculus or Abstract Meaning Representation (AMR). Alternatively, in more task-driven approaches to Semantic Parsing, it is common for meaning representations to be executable programs such as SQL queries, robotic commands, smartphone instructions, and even general-purpose programming languages like Python and Java.
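
To make the task concrete, here is a minimal sketch of the utterance-to-program mapping described above, using a toy template-based parser that turns a narrow class of English questions into SQL. All patterns, table names, and column names are invented for illustration; real semantic parsers learn this mapping rather than hard-coding it.

```python
# Hypothetical illustration of semantic parsing as utterance -> program:
# a toy template-based parser mapping a few English question shapes to SQL.
# The patterns and schema (tables "rivers"/"cities", column "state") are invented.
import re
from typing import Optional

PATTERNS = [
    # "how many rivers are in texas" -> a COUNT query
    (re.compile(r"how many (\w+) are in (\w+)"),
     lambda m: f"SELECT COUNT(*) FROM {m.group(1)} WHERE state = '{m.group(2)}'"),
    # "list all cities in ohio" -> a SELECT query
    (re.compile(r"list all (\w+) in (\w+)"),
     lambda m: f"SELECT name FROM {m.group(1)} WHERE state = '{m.group(2)}'"),
]

def parse(utterance: str) -> Optional[str]:
    """Return a SQL meaning representation, or None if no pattern matches."""
    text = utterance.lower().strip("?! .")
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(text)
        if m:
            return build(m)
    return None

print(parse("How many rivers are in Texas?"))
# -> SELECT COUNT(*) FROM rivers WHERE state = 'texas'
```

The interesting part of the task is exactly what this sketch elides: generalizing beyond fixed templates to arbitrary phrasings and compositional structure.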

Source: TranX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation

Most implemented papers

NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System

TellinaTool/nl2bash LREC 2018

We present new data and semantic parsing methods for the problem of mapping English sentences to Bash commands (NL2Bash).

DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

allenai/allennlp-reading-comprehension NAACL 2019

We introduce a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs.

Measuring Compositional Generalization: A Comprehensive Method on Realistic Data

google-research/google-research ICLR 2020

We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures.

Schema2QA: High-Quality and Low-Cost Q&A Agents for the Structured Web

stanford-oval/genie-toolkit 16 Jan 2020

The key concept is to cover the space of possible compound queries on the database with a large number of in-domain questions synthesized with the help of a corpus of generic query templates.

AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data

stanford-oval/genie-toolkit EMNLP 2020

To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset.

Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

awslabs/gap-text2sql 18 Dec 2020

Most recently, there has been significant interest in learning contextual representations for various NLP tasks, by leveraging large scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Model (MLM).
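
The Masked Language Model (MLM) objective mentioned above can be sketched in a few lines: randomly replace a fraction of input tokens with a mask symbol, and train the model to recover the originals at those positions. The tokenization, masking rate, and mask symbol below are toy choices for illustration, not the paper's setup.

```python
# Minimal, hypothetical illustration of MLM-style input corruption:
# some tokens are replaced with [MASK], and the originals become targets.
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    rng = random.Random(seed)  # seeded for reproducibility
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # the training target at this position
        else:
            masked.append(tok)
    return masked, targets

tokens = "translate this question into a sql query".split()
masked, targets = mask_tokens(tokens)
```

A model trained with this objective learns contextual representations by predicting each masked token from its surrounding context.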

PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models

ElementAI/picard EMNLP 2021

Large pre-trained language models for textual data have an unconstrained output space; at each decoding step, they can produce any of 10,000s of sub-word tokens.

Prompt Injection: Parameterization of Fixed Inputs

unbiarirang/prompt-injection 31 May 2022

Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts.

OpenICL: An Open-Source Framework for In-context Learning

shark-nlp/openicl 6 Mar 2023

However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks.

A Probabilistic Generative Grammar for Semantic Parsing

asaparov/parser CoNLL 2017

The work relies on a novel application of hierarchical Dirichlet processes (HDPs) for structured prediction, which we also present in this manuscript.