2 code implementations • 13 Mar 2024 • Xiang Hu, Pengyu Ji, Qingyang Zhu, Wei Wu, Kewei Tu
A syntactic language model (SLM) incrementally generates a sentence with its syntactic tree in a left-to-right manner.
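A left-to-right SLM can be pictured as emitting a sequence of transition actions that jointly produce the words and the constituency tree. The sketch below is illustrative only (not the paper's code); the action names `OPEN`, `GEN`, and `CLOSE` are hypothetical, in the spirit of generative transition-based parsers.

```python
# Minimal sketch (assumed action inventory, not the authors' implementation):
# replay a left-to-right action sequence into a bracketed constituency tree.

def build_tree(actions):
    """Replay (action, argument) pairs into a bracketed tree string."""
    stack = [[]]  # each element is a partially built constituent
    for act, arg in actions:
        if act == "OPEN":      # open a new constituent, e.g. (NP
            stack.append([arg])
        elif act == "GEN":     # generate the next terminal word
            stack[-1].append(arg)
        elif act == "CLOSE":   # close the current constituent
            done = stack.pop()
            stack[-1].append("(" + " ".join(done) + ")")
    return stack[0][0]

actions = [
    ("OPEN", "S"),
    ("OPEN", "NP"), ("GEN", "the"), ("GEN", "cat"), ("CLOSE", None),
    ("OPEN", "VP"), ("GEN", "sleeps"), ("CLOSE", None),
    ("CLOSE", None),
]
print(build_tree(actions))  # (S (NP the cat) (VP sleeps))
```

An actual SLM would place a probability distribution over the next action at each step; this sketch only shows how the incremental action sequence determines both the sentence and its tree.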
1 code implementation • 28 Sep 2023 • Xiang Hu, Qingyang Zhu, Kewei Tu, Wei Wu
More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating that the CIO layers confer good interpretability.
Tasks: Constituency Grammar Induction • Natural Language Inference • +1
2 code implementations • 6 Mar 2023 • Xiang Hu, Xinyu Kong, Kewei Tu
Because the structured language model learns to predict constituency trees in a self-supervised manner, only raw text and sentence-level labels are required as training data, making it essentially a general constituent-level self-interpretable classification model.
2 code implementations • 1 Mar 2022 • Xiang Hu, Haitao Mi, Liang Li, Gerard de Melo
We propose to use a top-down parser as a model-based pruning method, which also enables parallel encoding during inference.
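The idea of model-based pruning can be sketched as follows: instead of considering every span of a sentence, a top-down parser picks one split point per span, so only the spans on the chosen tree survive. This is a hedged illustration, not the authors' implementation; the `split_score` table of precomputed scores is a hypothetical stand-in for the parser's predictions.

```python
# Hedged sketch: top-down pruning keeps O(n) spans out of the O(n^2)
# spans in a full chart. Spans at the same depth are independent of one
# another, which is what makes parallel encoding possible.

def pruned_spans(split_score, left, right):
    """Collect the spans kept after top-down pruning of [left, right)."""
    spans = [(left, right)]
    if right - left > 1:
        # choose the highest-scoring split point k with left < k < right
        k = max(range(left + 1, right),
                key=lambda j: split_score[(left, right, j)])
        spans += pruned_spans(split_score, left, k)
        spans += pruned_spans(split_score, k, right)
    return spans

# Hypothetical split scores for a 4-word sentence, keyed by (left, right, k).
scores = {
    (0, 4, 1): 0.1, (0, 4, 2): 0.8, (0, 4, 3): 0.1,
    (0, 2, 1): 1.0,
    (2, 4, 3): 1.0,
}
spans = pruned_spans(scores, 0, 4)
print(len(spans))  # 2*4 - 1 = 7 spans survive, vs 10 in the full chart
```

For a length-n sentence this keeps 2n - 1 spans, while a full chart contains n(n + 1)/2, which is the source of the pruning speedup.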
1 code implementation • ACL 2021 • Xiang Hu, Haitao Mi, Zujie Wen, Yafang Wang, Yi Su, Jing Zheng, Gerard de Melo
Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.
no code implementations • COLING 2020 • Xiang Hu, Zujie Wen, Yafang Wang, Xiaolong Li, Gerard de Melo
In this work, we propose a reinforcement model to clarify ambiguous questions by suggesting refinements of the original query.