1 code implementation • 4 Apr 2024 • Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, Rui Zhang
This work introduces ReaLMistake, the first error detection benchmark consisting of objective, realistic, and diverse errors made by LLMs.
1 code implementation • 7 Nov 2023 • Sarkar Snigdha Sarathi Das, Ranran Haoran Zhang, Peng Shi, Wenpeng Yin, Rui Zhang
Unfortunately, this requires formatting them into a specialized augmented format unknown to the base pretrained language models (PLMs), necessitating fine-tuning to the target format.
1 code implementation • 23 Oct 2023 • Aysa Xuemo Fan, Ranran Haoran Zhang, Luc Paquette, Rui Zhang
In this paper, we explore the application of large language models (LLMs) for generating code-tracing questions in introductory programming courses.
1 code implementation • 14 Oct 2022 • Ranran Haoran Zhang, Aysa Xuemo Fan, Rui Zhang
To fill these gaps, we propose ConEntail, a new framework for universal zero and few shot classification with supervised contrastive pretraining.
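The core idea of entailment-based universal classification can be illustrated with a minimal sketch: each candidate label is verbalized into a hypothesis, and the label whose (input, hypothesis) pair is judged most entailed wins. This is not the ConEntail implementation; the hypothesis template and the toy word-overlap scorer below are illustrative stand-ins for a trained entailment model.

```python
# Minimal sketch of entailment-framed classification (illustrative,
# not the ConEntail implementation). A real system would replace
# toy_score with a supervised contrastively pretrained entailment model.

def label_to_hypothesis(label: str) -> str:
    """Turn a class label into a natural-language hypothesis (assumed template)."""
    return f"This example is about {label}."

def classify(premise: str, labels: list[str], entail_score) -> str:
    """Pick the label whose (premise, hypothesis) pair scores highest."""
    scored = {lab: entail_score(premise, label_to_hypothesis(lab)) for lab in labels}
    return max(scored, key=scored.get)

def toy_score(premise: str, hypothesis: str) -> int:
    """Stand-in scorer: word overlap between the two texts."""
    norm = lambda t: {w.strip(".,").lower() for w in t.split()}
    return len(norm(premise) & norm(hypothesis))

print(classify("This article covers sports and the final score.",
               ["sports", "politics"], toy_score))  # prints "sports"
```

Because labels enter only through the hypothesis text, the same classifier head covers new label sets with zero or few examples, which is what makes the framing "universal".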
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ranran Haoran Zhang, Qianying Liu, Aysa Xuemo Fan, Heng Ji, Daojian Zeng, Fei Cheng, Daisuke Kawahara, Sadao Kurohashi
We propose a novel Sequence-to-Unordered-Multi-Tree (Seq2UMTree) model to minimize the effects of exposure bias by limiting the decoding length to three within a triplet and removing the order among triplets.
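The two properties named in the summary — a bounded decoding length per triplet and no order among triplets — can be sketched in a few lines. This is an illustrative toy, not the Seq2UMTree model; the relation-then-head-then-tail decoding order is an assumption for the example.

```python
# Illustrative sketch (not the paper's model): why treating extracted
# triplets as an unordered set removes sensitivity to target order.
# A sequence target such as [t1, t2] penalizes a model that emits
# [t2, t1], even though both triplet sets are identical.

gold = [("Paris", "capital_of", "France"),
        ("France", "contains", "Paris")]
pred = [gold[1], gold[0]]  # same triplets, opposite order

# Order-sensitive comparison (flat sequence view): spurious mismatch.
print(gold == pred)            # prints False

# Order-free comparison (unordered multi-tree view): match.
print(set(gold) == set(pred))  # prints True

# Within a triplet, decoding is a fixed depth-3 unit (e.g. relation,
# then head, then tail — the order here is assumed), so the decoding
# length per triplet is capped at three, limiting how far exposure
# bias can compound.
triplet = ("capital_of", "Paris", "France")
assert len(triplet) == 3
```

Bounding each decoded unit at three steps means early mistakes can propagate only within one triplet, not across the whole output sequence.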
2 code implementations • 24 Nov 2019 • Daojian Zeng, Ranran Haoran Zhang, Qianying Liu
The model is extremely weak at distinguishing the head and tail entities, resulting in inaccurate entity extraction.
Ranked #12 on Relation Extraction on WebNLG