Constituency Parsing

73 papers with code • 4 benchmarks • 6 datasets

Constituency parsing aims to extract a constituency-based parse tree from a sentence; the tree represents the sentence's syntactic structure according to a phrase structure grammar.

Example:

             Sentence (S)
                 |
   +-------------+------------+
   |                          |
 Noun (N)                Verb Phrase (VP)
   |                          |
 John                 +-------+--------+
                      |                |
                    Verb (V)         Noun (N)
                      |                |
                    sees              Bill

Recent approaches convert the parse tree into a sequence following a depth-first traversal so that sequence-to-sequence models can be applied to it. The linearized version of the above parse tree is: (S (N) (VP (V) (N))).
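The depth-first linearization can be sketched in a few lines of Python. This is a minimal illustration using a hypothetical nested-tuple tree representation, not code from any particular parser:

```python
def linearize(tree):
    """Depth-first traversal producing a bracketed string.

    A tree is either a (label, children...) tuple or a terminal word.
    Terminal words are dropped so only the syntactic skeleton remains.
    """
    label, *children = tree
    inner = " ".join(
        linearize(c) for c in children if isinstance(c, tuple)
    )
    return f"({label} {inner})" if inner else f"({label})"

# The example tree from above: "John sees Bill"
tree = ("S",
        ("N", "John"),
        ("VP",
         ("V", "sees"),
         ("N", "Bill")))

print(linearize(tree))  # (S (N) (VP (V) (N)))
```

Keeping or dropping the terminal words is a design choice; dropping them, as here, yields the compact skeleton that sequence models are typically trained to predict.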

Latest papers with no code

Fast Rule-Based Decoding: Revisiting Syntactic Rules in Neural Constituency Parsing

no code yet • 16 Dec 2022

Most recent studies on neural constituency parsing focus on encoder structures, while few developments are devoted to decoders.

Joint Chinese Word Segmentation and Span-based Constituency Parsing

no code yet • 3 Nov 2022

In constituency parsing, span-based decoding is an important direction.

Order-sensitive Neural Constituency Parsing

no code yet • 1 Nov 2022

We propose a novel algorithm that improves on the previous neural span-based CKY decoder for constituency parsing.
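Several of the papers listed here build on span-based CKY decoding. The underlying dynamic program can be sketched as follows; this is a minimal illustration assuming a generic span-scoring function (a stand-in for the scores a neural encoder would produce), not any specific paper's decoder:

```python
def cky_decode(n, score):
    """Find the binary bracketing of n words maximizing total span score."""
    best, split = {}, {}
    for i in range(n):                      # length-1 spans (single words)
        best[(i, i + 1)] = score(i, i + 1)
    for length in range(2, n + 1):          # longer spans, bottom-up
        for i in range(n - length + 1):
            j = i + length
            k, s = max(
                ((k, best[(i, k)] + best[(k, j)]) for k in range(i + 1, j)),
                key=lambda t: t[1],
            )
            best[(i, j)] = score(i, j) + s
            split[(i, j)] = k

    def brackets(i, j):                     # backtrack the best splits
        if j - i == 1:
            return [(i, j)]
        k = split[(i, j)]
        return [(i, j)] + brackets(i, k) + brackets(k, j)

    return best[(0, n)], brackets(0, n)

# Toy usage: three words with every span scored 1.0, so the best score
# is the number of spans in a binary tree over three words (5).
total, spans = cky_decode(3, lambda i, j: 1.0)
```

The decoder runs in O(n³) time over span boundaries, which is exactly the cost the order-sensitive and rule-based-decoding papers above aim to reduce or restructure.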

Shift-Reduce Task-Oriented Semantic Parsing with Stack-Transformers

no code yet • 21 Oct 2022

In this article, we advance the research on shift-reduce semantic parsing for task-oriented dialog.

Unsupervised Full Constituency Parsing with Neighboring Distribution Divergence

no code yet • ACL ARR January 2022

Unsupervised constituency parsing has been explored extensively but is still far from solved, as current mainstream unsupervised constituency parsers only capture the unlabeled structure of sentences.

Investigating Non-local Features for Neural Constituency Parsing

no code yet • ACL ARR January 2022

Besides, our method achieves state-of-the-art BERT-based performance on PTB (95.92 F1) and strong performance on CTB (92.31 F1).

A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language Models

no code yet • 14 Jan 2022

To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high quality texts found online by targeting the Icelandic top-level domain (TLD).

Semantics-Preserved Distortion for Personal Privacy Protection in Information Management

no code yet • 4 Jan 2022

As the model training on information from users is likely to invade personal privacy, many methods have been proposed to block the learning and memorizing of the sensitive data in raw texts.

Re-thinking Supertags in Linear Context-free Rewriting Systems for Constituency Parsing

no code yet • ACL ARR November 2021

Recently, a supertagging-based approach for parsing discontinuous constituent trees with linear context-free rewriting systems (LCFRS) was introduced.

Phrase-aware Unsupervised Constituency Parsing

no code yet • ACL ARR November 2021

Recent studies have achieved inspiring success in unsupervised grammar induction using masked language modeling (MLM) as the proxy task.