Constituency Parsing

73 papers with code • 4 benchmarks • 6 datasets

Constituency parsing aims to extract a constituency-based parse tree from a sentence; the tree represents the sentence's syntactic structure according to a phrase structure grammar.

Example:

             Sentence (S)
                 |
   +-------------+------------+
   |                          |
 Noun (N)                Verb Phrase (VP)
   |                          |
 John                 +-------+--------+
                      |                |
                    Verb (V)         Noun (N)
                      |                |
                    sees              Bill

Recent approaches convert the parse tree into a sequence following a depth-first traversal, so that sequence-to-sequence models can be applied to it. The linearized version of the above parse tree looks as follows: (S (N) (VP (V) (N))).
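The depth-first linearization can be sketched in a few lines. This is a minimal illustration, not the implementation of any paper listed below; the tree encoding (label plus children, with string leaves) is an assumption chosen for brevity.

```python
# A minimal sketch (not from any cited paper): linearize a constituency
# tree into a bracketed sequence via depth-first traversal.

def linearize(tree):
    """tree is (label, children) for internal nodes, or a plain word string."""
    if isinstance(tree, str):          # terminal word: dropped, matching the
        return ""                      # label-only linearization above
    label, children = tree
    inner = " ".join(filter(None, (linearize(c) for c in children)))
    return f"({label} {inner})" if inner else f"({label})"

# The example tree: S -> N(John), VP -> V(sees) N(Bill)
tree = ("S", [("N", ["John"]), ("VP", [("V", ["sees"]), ("N", ["Bill"])])])
print(linearize(tree))  # (S (N) (VP (V) (N)))
```

Keeping the words as leaves and emitting them instead of dropping them would yield the fully lexicalized form (S (N John) (VP (V sees) (N Bill))).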

Court Judgement Labeling Using Topic Modeling and Syntactic Parsing

hkulyc/legalai 3 Aug 2022

In regions that practice common law, relevant historical cases are essential references for sentencing.

Nested Named Entity Recognition as Latent Lexicalized Constituency Parsing

louchao98/nner_as_parsing ACL 2022

The authors treat nested entities as partially observed constituency trees and propose the masked inside algorithm for partial marginalization.

09 Mar 2022

CPTAM: Constituency Parse Tree Aggregation Method

kulkarniadithya/cptam 19 Jan 2022

This paper adopts the truth discovery idea to aggregate constituency parse trees from different parsers by estimating their reliability in the absence of ground truth.

Bottom-Up Constituency Parsing and Nested Named Entity Recognition with Pointer Networks

sustcsonglin/pointer-net-for-nested ACL 2022

Constituency parsing and nested named entity recognition (NER) are similar tasks since they both aim to predict a collection of nested and non-crossing spans.
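The structural property shared by the two tasks can be made concrete: a set of spans is valid for both constituency trees and nested NER when every pair of spans is either disjoint or nested (i.e., non-crossing). A small sketch of that check, using half-open spans as an assumed convention:

```python
# A minimal sketch (assumption, not the cited paper's method): verify that
# a set of half-open spans (start, end) is non-crossing, i.e. any two
# spans are either disjoint or nested -- the property shared by
# constituency trees and nested named entities.

def non_crossing(spans):
    for i, (a, b) in enumerate(spans):
        for c, d in spans[i + 1:]:
            disjoint = b <= c or d <= a
            nested = (a <= c and d <= b) or (c <= a and b <= d)
            if not (disjoint or nested):
                return False
    return True

print(non_crossing([(0, 5), (0, 1), (1, 5), (1, 2)]))  # True: all nested/disjoint
print(non_crossing([(0, 3), (2, 5)]))                  # False: the spans cross
```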

11 Oct 2021

Investigating Non-local Features for Neural Constituency Parsing

ringos/nfc-parser ACL 2022

Our method also achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).

27 Sep 2021

Dependency Induction Through the Lens of Visual Perception

ruisi-su/concrete_dep CoNLL (EMNLP) 2021

Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% compared to state-of-the-art models trained on pure text.

20 Sep 2021

Improved Latent Tree Induction with Distant Supervision via Span Constraints

iesl/distantly-supervised-diora EMNLP 2021

For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing.

10 Sep 2021

ELIT: Emory Language and Information Toolkit

emorynlp/elit 8 Sep 2021

We introduce ELIT, the Emory Language and Information Toolkit, which is a comprehensive NLP framework providing transformer-based end-to-end models for core tasks with a special focus on memory efficiency while maintaining state-of-the-art accuracy and speed.

Headed-Span-Based Projective Dependency Parsing

sustcsonglin/span-based-dependency-parsing ACL 2022

In a projective dependency tree, the largest subtree rooted at each word covers a contiguous sequence (i.e., a span) in the surface order.
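The stated property is easy to demonstrate: given the head index of each word, the span covered by each subtree can be computed, and for a projective tree that span is exactly the subtree's contiguous yield. A minimal sketch (an illustration of the property, not the paper's parsing algorithm):

```python
# A minimal sketch (assumption, not the paper's method): given head
# indices of a dependency tree, compute the [lo, hi] word range covered
# by the subtree rooted at each word. In a projective tree, this range
# is exactly the subtree's contiguous yield.

def subtree_spans(heads):
    """heads[i] = parent index of word i, or -1 for the root."""
    n = len(heads)
    lo, hi = list(range(n)), list(range(n))
    # propagate each word up its head chain, widening ancestor spans
    for i in range(n):
        j = heads[i]
        while j != -1:
            lo[j] = min(lo[j], i)
            hi[j] = max(hi[j], i)
            j = heads[j]
    return list(zip(lo, hi))

# "John sees Bill": John and Bill attach to sees (index 1), sees is root
print(subtree_spans([1, -1, 1]))  # [(0, 0), (0, 2), (2, 2)]
```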

10 Aug 2021