Search Results for author: Xiang Hu

Found 6 papers, 5 papers with code

Generative Pretrained Structured Transformers: Unsupervised Syntactic Language Models at Scale

2 code implementations • 13 Mar 2024 • Xiang Hu, Pengyu Ji, Qingyang Zhu, Wei Wu, Kewei Tu

A syntactic language model (SLM) incrementally generates a sentence with its syntactic tree in a left-to-right manner.

Tasks: Language Modelling, Sentence, +1
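The left-to-right generation described in the snippet above can be pictured as interleaving word tokens with tree-building actions, so the parse grows alongside the sentence. Below is a minimal illustrative sketch, not the paper's code; the action names (OPEN, GEN, CLOSE) and the toy sentence are hypothetical stand-ins for whatever transition inventory a given SLM uses.

    # Sketch: a syntactic LM's output viewed as an action sequence that
    # builds a constituency tree left-to-right while emitting words.
    actions = [
        ("OPEN", "S"),      # open an S constituent
        ("OPEN", "NP"),
        ("GEN", "the"),     # emit a terminal word
        ("GEN", "cat"),
        ("CLOSE", None),    # close NP
        ("OPEN", "VP"),
        ("GEN", "sleeps"),
        ("CLOSE", None),    # close VP
        ("CLOSE", None),    # close S
    ]

    def render(actions):
        """Replay the action sequence into a bracketed tree string."""
        parts = []
        for op, arg in actions:
            if op == "OPEN":
                parts.append(f"({arg}")
            elif op == "GEN":
                parts.append(arg)
            else:  # CLOSE
                parts.append(")")
        return " ".join(parts)

    print(render(actions))  # (S (NP the cat ) (VP sleeps ) )

Each prefix of the action sequence corresponds to a partial sentence plus a partial tree, which is what makes incremental, left-to-right syntactic generation possible.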

Augmenting Transformers with Recursively Composed Multi-grained Representations

1 code implementation • 28 Sep 2023 • Xiang Hu, Qingyang Zhu, Kewei Tu, Wei Wu

More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating good interpretability brought by the CIO layers.

Tasks: Constituency Grammar Induction, Natural Language Inference, +1

A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification

2 code implementations • 6 Mar 2023 • Xiang Hu, Xinyu Kong, Kewei Tu

As the structured language model learns to predict constituency trees in a self-supervised manner, only raw texts and sentence-level labels are required as training data, which makes it essentially a general constituent-level self-interpretable classification model.

Tasks: Language Modelling, Sentence, +2
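One way to picture constituent-level self-interpretable classification, as described in the snippet above: score every induced constituent span against the label set and pool the span scores into a sentence-level prediction, so the top-scoring span for each predicted label doubles as its rationale. The sketch below illustrates that idea only; it is not the paper's implementation, and every name in it is hypothetical.

    import torch

    def classify(span_reprs, span_texts, label_proj, threshold=0.5):
        """span_reprs: (num_spans, dim) vectors for induced constituents.
        label_proj: (dim, num_labels) projection into the label space."""
        span_logits = span_reprs @ label_proj          # (num_spans, num_labels)
        sent_logits, support = span_logits.max(dim=0)  # pool over spans
        probs = torch.sigmoid(sent_logits)             # multi-label setting
        predicted = (probs > threshold).nonzero().flatten().tolist()
        # The argmax span for each predicted label serves as its rationale.
        rationales = {l: span_texts[support[l]] for l in predicted}
        return predicted, rationales

    # Toy usage: random vectors stand in for learned span representations.
    dim, num_labels = 8, 3
    spans = ["the service", "was terrible", "great food"]
    labels, why = classify(torch.randn(len(spans), dim), spans,
                           torch.randn(dim, num_labels))
    print(labels, why)

Because only the sentence-level labels are supervised, the span-level scores (and hence the rationales) come for free, which matches the snippet's point that raw text plus sentence labels suffice as training data.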

R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling

1 code implementation • ACL 2021 • Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo

Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.

Tasks: Language Modelling
