2 code implementations • 6 Mar 2024 • Yikun Sun, Zhen Wan, Nobuhiro Ueda, Sakiko Yahata, Fei Cheng, Chenhui Chu, Sadao Kurohashi
The creation of instruction data and evaluation benchmarks for serving large language models often involves enormous human annotation.
no code implementations • 5 Oct 2023 • Zhen Wan, Yating Zhang, Yexiang Wang, Fei Cheng, Sadao Kurohashi
In the zero-shot setting of four Chinese legal tasks, our method improves accuracy by 33.3% compared to direct generation by GPT-4.
no code implementations • 16 Jun 2023 • Xiaofei Sun, Linfeng Dong, Xiaoya Li, Zhen Wan, Shuhe Wang, Tianwei Zhang, Jiwei Li, Fei Cheng, Lingjuan Lyu, Fei Wu, Guoyin Wang
In this work, we propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.
1 code implementation • 3 May 2023 • Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, Sadao Kurohashi
In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3), they still lag significantly behind fully-supervised baselines (e.g., fine-tuned BERT) in relation extraction (RE).
1 code implementation • 21 Oct 2022 • Zhen Wan, Qianying Liu, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi, Jiwei Li
Relation extraction (RE) has achieved remarkable progress with the help of pre-trained language models.
1 code implementation • 21 Sep 2022 • Yibin Shen, Qianying Liu, Zhuoyuan Mao, Zhen Wan, Fei Cheng, Sadao Kurohashi
To solve Math Word Problems, human students leverage diverse reasoning logic that reaches different possible equation solutions.
no code implementations • 18 May 2022 • Zhen Wan, Fei Cheng, Qianying Liu, Zhuoyuan Mao, Haiyue Song, Sadao Kurohashi
Contrastive pre-training on distant supervision has shown remarkable effectiveness in improving supervised relation extraction tasks.
no code implementations • Findings (NAACL) 2022 • Zhuoyuan Mao, Chenhui Chu, Raj Dabre, Haiyue Song, Zhen Wan, Sadao Kurohashi
Meanwhile, the contrastive objective can implicitly utilize automatically learned word alignment, which has not been explored in many-to-many NMT.
no code implementations • 2 Feb 2021 • Zhen Wan, William Oliver, Holger Baumgardt, Geraint Lewis, Mark Gieles, Vincent Hénault-Brunet, Thomas de Boer, Eduardo Balbinot, Gary Da Costa, Dougal Mackey
We also estimate the effect on the velocity dispersion of varying numbers of stellar-mass black holes and of unbound stars from the tidal tails with varying escape rates, and find that these effects cannot explain the difference between the LOS dispersion and the N-body model.
Astrophysics of Galaxies