Constituency Grammar Induction

15 papers with code • 1 benchmark • 1 dataset

Inducing a constituency-based phrase structure grammar.
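
For orientation, the object being induced is a phrase-structure (constituency) tree over a sentence. A minimal illustration using NLTK's bracketed-tree reader; the sentence and its bracketing are hand-written for the example, not the output of any induction system:

```python
# Requires: pip install nltk
from nltk import Tree

# A hand-written constituency parse in bracketed notation.
parse = Tree.fromstring(
    "(S (NP (DT the) (NN cat)) (VP (VBD sat) (PP (IN on) (NP (DT the) (NN mat)))))"
)
parse.pretty_print()    # ASCII rendering of the tree
print(parse.leaves())   # ['the', 'cat', 'sat', 'on', 'the', 'mat']
```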

Most implemented papers

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

yikangshen/Ordered-Neurons ICLR 2019

When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed.
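
The model enforces this nesting constraint through ordered, monotone gates computed with a cumulative softmax ("cumax"). A rough numpy sketch of that gate, with shapes and variable names chosen for illustration rather than taken from the authors' code:

```python
import numpy as np

def cumax(logits):
    """Cumulative softmax: a monotonically non-decreasing gate in [0, 1]."""
    e = np.exp(logits - logits.max())
    return np.cumsum(e / e.sum())

# In ON-LSTM the master forget gate is cumax(.) and the master input gate is
# 1 - cumax(.), so high-ranking ("larger constituent") neurons are only erased
# when everything nested below them is erased too.
master_forget = cumax(np.random.randn(8))
master_input = 1.0 - cumax(np.random.randn(8))
print(master_forget.round(2))
print(master_input.round(2))
```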

Compound Probabilistic Context-Free Grammars for Grammar Induction

harvardnlp/compound-pcfg ACL 2019

We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar.
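
"Compound" here means the rule probabilities are not fixed: each sentence draws a continuous latent vector z, and the PCFG's rule probabilities are computed as a function of z. A schematic numpy sketch of that idea; the sizes, the linear scoring, and all names are illustrative assumptions rather than the paper's parameterization:

```python
import numpy as np
rng = np.random.default_rng(0)

NT, z_dim = 4, 16                            # toy number of nonterminals / latent size
emb = rng.normal(size=(NT, NT, NT, z_dim))   # one embedding per binary rule A -> B C

def rule_probs(z):
    """Per-sentence binary-rule distribution p(A -> B C | z)."""
    scores = emb @ z                         # (NT, NT, NT): a score for every rule
    scores = scores.reshape(NT, -1)
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)  # normalize over (B, C) for each parent A
    return probs.reshape(NT, NT, NT)

z = rng.normal(size=z_dim)                   # sentence-level latent variable
pi = rule_probs(z)                           # grammar used only for this sentence
print(pi[0].sum())                           # ~1.0: a proper distribution per parent
```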

Neural Language Modeling by Jointly Learning Syntax and Lexicon

nyu-mll/PRPN-Analysis ICLR 2018

In this paper, we propose a novel neural language model, called Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.
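
PRPN's structured attention is driven by "syntactic distances" between adjacent words, and an unlabeled binary tree can be read off by recursively splitting the sentence at the largest distance. A small stand-alone sketch of that greedy decoding step; the distances below are made up for the example:

```python
def distances_to_tree(words, dists):
    """Greedy top-down decoding: split at the largest syntactic distance.

    `dists[i]` is the distance between words[i] and words[i + 1], so
    len(dists) == len(words) - 1.
    """
    if len(words) == 1:
        return words[0]
    split = max(range(len(dists)), key=lambda i: dists[i])
    left = distances_to_tree(words[: split + 1], dists[:split])
    right = distances_to_tree(words[split + 1 :], dists[split + 1 :])
    return (left, right)

words = ["the", "cat", "sat", "on", "the", "mat"]
dists = [0.2, 0.9, 0.8, 0.3, 0.1]            # toy distances, not model output
print(distances_to_tree(words, dists))
# (('the', 'cat'), ('sat', ('on', ('the', 'mat'))))
```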

Unsupervised Learning of Syntactic Structure with Invertible Neural Projections

jxhe/struct-learning-with-flow EMNLP 2018

In this work, we propose a novel generative model that jointly learns discrete syntactic structure and continuous word representations in an unsupervised fashion by cascading an invertible neural network with a structured generative prior.
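
The central computational piece is an invertible projection between the latent space of a structured generative prior and pretrained word embeddings, so the likelihood picks up a change-of-variables (Jacobian) term. A minimal numpy sketch with a plain invertible linear map standing in for the paper's invertible network and a standard Gaussian standing in for the structured prior; everything here is an illustrative assumption:

```python
import numpy as np
rng = np.random.default_rng(0)

d = 5
W = rng.normal(size=(d, d)) + 3 * np.eye(d)  # a (generically) invertible projection
x = rng.normal(size=d)                       # observed pretrained word embedding

# Map the observation back into the latent space of the structured prior.
e = np.linalg.solve(W, x)                    # e = W^{-1} x, the latent embedding

def log_gaussian(v):
    return -0.5 * (v @ v + len(v) * np.log(2 * np.pi))

# Change of variables for x = W e:  log p(x) = log p_prior(e) - log |det W|
log_px = log_gaussian(e) - np.log(abs(np.linalg.det(W)))
print(log_px)
```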

Unsupervised Recurrent Neural Network Grammars

harvardnlp/urnng NAACL 2019

On language modeling, unsupervised RNNGs perform as well as their supervised counterparts on benchmarks in English and Chinese.
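
An RNNG builds a tree through a sequence of SHIFT/REDUCE actions; the unsupervised variant treats this action sequence as a latent variable and optimizes a variational bound over it. A tiny helper showing the correspondence between a binary tree and its action sequence (the tree and naming are illustrative):

```python
def tree_to_actions(tree):
    """Linearize a binary tree (nested 2-tuples over word strings) into
    the SHIFT / REDUCE actions an RNNG would take to build it."""
    if isinstance(tree, str):
        return ["SHIFT"]                     # push the next word onto the stack
    left, right = tree
    return tree_to_actions(left) + tree_to_actions(right) + ["REDUCE"]

tree = (("the", "cat"), ("sat", ("on", ("the", "mat"))))
print(tree_to_actions(tree))
# ['SHIFT', 'SHIFT', 'REDUCE', 'SHIFT', 'SHIFT', 'SHIFT', 'SHIFT',
#  'REDUCE', 'REDUCE', 'REDUCE', 'REDUCE']
```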

Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders

iesl/diora NAACL 2019

We introduce the deep inside-outside recursive autoencoder (DIORA), a fully-unsupervised method for discovering syntax that simultaneously learns representations for constituents within the induced tree.
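
DIORA fills a CKY-style chart bottom-up: the vector for each span is a softly weighted combination over all ways of splitting it into two sub-spans, and an outside pass then runs the analogous computation top-down. A compressed numpy sketch of just the inside pass, with the composition function, split scoring, and dimensions simplified for illustration:

```python
import numpy as np
rng = np.random.default_rng(0)

words = rng.normal(size=(5, 8))              # 5 toy word vectors, dim 8
W = rng.normal(size=(8, 16)) * 0.1           # shared composition weights

def compose(a, b):
    return np.tanh(W @ np.concatenate([a, b]))

n, d = words.shape
chart = {(i, i + 1): words[i] for i in range(n)}   # length-1 spans are the words

for length in range(2, n + 1):
    for i in range(0, n - length + 1):
        j = i + length
        cands, scores = [], []
        for k in range(i + 1, j):                  # every split point of span (i, j)
            cands.append(compose(chart[(i, k)], chart[(k, j)]))
            scores.append(chart[(i, k)] @ chart[(k, j)])  # toy compatibility score
        scores = np.array(scores)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        chart[(i, j)] = sum(wk * vk for wk, vk in zip(w, cands))  # soft-weighted sum

print(chart[(0, n)].shape)                   # the root span's representation
```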

Visually Grounded Compound PCFGs

zhaoyanpeng/vpcfg EMNLP 2020

In this work, we study visually grounded grammar induction and learn a constituency parser from both unlabeled text and its visual groundings.

PCFGs Can Do Better: Inducing Probabilistic Context-Free Grammars with Many Symbols

sustcsonglin/TN-PCFG NAACL 2021

In this work, we present a new parameterization of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the number of symbols and therefore allows us to use a much larger number of symbols.
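
The trick is to never materialize the full rule tensor T[A, B, C]: it is represented in a low-rank (CP-style) decomposition, so the inside algorithm's inner contraction can be carried out factor by factor instead of over all symbol triples. A small numpy/einsum sketch of that decomposed contraction; the symbol and rank sizes are arbitrary:

```python
import numpy as np
rng = np.random.default_rng(0)

NT, R = 50, 20                               # number of symbols, decomposition rank
U = rng.random((NT, R))
V = rng.random((NT, R))
W = rng.random((NT, R))

# Implicit rule tensor: T[A, B, C] = sum_r U[A, r] * V[B, r] * W[C, r]
beta_left = rng.random(NT)                   # toy inside scores of the two children
beta_right = rng.random(NT)

# Naive contraction materializes T explicitly: O(NT^3) space and time.
T = np.einsum("ar,br,cr->abc", U, V, W)
naive = np.einsum("abc,b,c->a", T, beta_left, beta_right)

# Decomposed contraction touches only the factors: O(NT * R).
fast = U @ ((V.T @ beta_left) * (W.T @ beta_right))

print(np.allclose(naive, fast))              # True
```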

Neural Bi-Lexicalized PCFG Induction

sustcsonglin/TN-PCFG ACL 2021

Neural lexicalized PCFGs (L-PCFGs) have been shown to be effective in grammar induction.