AMR Parsing

49 papers with code • 8 benchmarks • 6 datasets

Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
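As a toy illustration of this structure, the standard example sentence "The boy wants to go" can be represented as a set of triples over a rooted, directed graph (a hedged sketch; the variable names follow the conventional AMR rendering of this sentence):

```python
# Each triple is (source_variable, relation, target). ":instance" links a
# variable to its concept; reusing the variable "b" encodes within-sentence
# coreference (the boy is both the wanter and the goer).
amr = {
    ("w", ":instance", "want-01"),  # PropBank frame for "wants"
    ("b", ":instance", "boy"),
    ("g", ":instance", "go-01"),
    ("w", ":ARG0", "b"),            # semantic role: the wanter
    ("w", ":ARG1", "g"),            # what is wanted
    ("g", ":ARG0", "b"),            # the goer -- same variable, so coreferent
}
root = "w"  # the graph is single-rooted; the root is the focus of the sentence

for src, rel, tgt in sorted(amr):
    print(src, rel, tgt)
```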

Most implemented papers

An Incremental Parser for Abstract Meaning Representation

mdtux89/amr-evaluation EACL 2017

We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time.
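To sketch the idea of left-to-right, linear-time transition parsing, here is a toy parser with a hypothetical, heavily simplified action set (not the paper's actual transition system): each token is consumed exactly once, so the number of actions is linear in sentence length.

```python
# Toy transition-based sketch: consume tokens left-to-right, letting an
# oracle choose one action per token (hypothetical action names).
def parse(tokens, oracle):
    buffer = list(tokens)
    nodes, edges = [], []
    while buffer:
        token = buffer.pop(0)
        action, arg = oracle(token, nodes)
        if action == "CONFIRM":      # map the token to an AMR concept
            nodes.append(arg)
        elif action == "ARC":        # new concept plus an edge to an earlier one
            concept, head, relation = arg
            nodes.append(concept)
            edges.append((head, relation, concept))
        # "IGNORE": function words map to no concept
    return nodes, edges

# Hypothetical gold action sequence for "The boy wants to go"
script = iter([
    ("IGNORE", None),
    ("CONFIRM", "boy"),
    ("ARC", ("want-01", "boy", ":ARG0-of")),  # inverse role keeps the graph rooted
    ("IGNORE", None),
    ("ARC", ("go-01", "want-01", ":ARG1")),
])
nodes, edges = parse("The boy wants to go".split(), lambda t, n: next(script))
print(nodes)   # ['boy', 'want-01', 'go-01']
print(edges)
```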

Neural AMR: Sequence-to-Sequence Models for Parsing and Generation

freesunshine0316/neural-graph-to-seq-mp ACL 2017

Sequence-to-sequence models have shown strong performance across a broad range of applications.

AMR Parsing via Graph-Sequence Iterative Inference

jcyk/AMR-gs ACL 2020

We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.

RIGA at SemEval-2016 Task 8: Impact of Smatch Extensions and Character-Level Neural Translation on AMR Parsing Accuracy

didzis/tensorflowAMR SEMEVAL 2016

The first extension combines the smatch scoring script with the C6.0 rule-based classifier to produce a human-readable report on the frequency of error patterns observed in the scored AMR graphs.

Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations

RikVN/AMR 28 May 2017

We evaluate the character-level translation method for neural semantic parsing on a large corpus of sentences annotated with Abstract Meaning Representations (AMRs).

AMR Parsing as Graph Prediction with Latent Alignment

ChunchuanLv/AMR_AS_GRAPH_PREDICTION ACL 2018

AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences.

SemBleu: A Robust Metric for AMR Parsing Evaluation

freesunshine0316/sembleu ACL 2019

Evaluating AMR parsing accuracy involves comparing pairs of AMR graphs.
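A minimal sketch of what such a comparison involves: an F1 score over the overlapping triples of two graphs. (This is a simplification for illustration; the actual Smatch metric also searches over variable mappings between the two graphs, which is exactly the instability that SemBleu's authors critique. Here the variables are assumed pre-aligned.)

```python
# Toy triple-overlap F1 between a gold and a predicted AMR, assuming the
# variables of the two graphs are already aligned one-to-one.
def triple_f1(gold, predicted):
    matched = len(gold & predicted)
    if matched == 0:
        return 0.0
    precision = matched / len(predicted)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("w", ":instance", "want-01"), ("b", ":instance", "boy"),
        ("w", ":ARG0", "b")}
pred = {("w", ":instance", "want-01"), ("b", ":instance", "boy"),
        ("w", ":ARG1", "b")}  # one edge carries the wrong role label
print(round(triple_f1(gold, pred), 3))  # 0.667
```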

Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

IBM/transition-amr-parser NAACL 2022

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning.

Graph Pre-training for AMR Parsing and Generation

muyeby/amrbart ACL 2022

To our knowledge, we are the first to consider pre-training on semantic graphs.

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

chenllliang/atp Findings (NAACL) 2022

As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing.