no code implementations • CAI (COLING) 2022 • Jie Zeng, Tatsuya Sakato, Yukiko Nakano
Next, using the corpus as training data, we built a classification model that predicts the communicative function of the interviewer’s next utterance, and a generative model that predicts its semantic content from the dialogue history.
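The described pipeline can be illustrated with a toy sketch: given the dialogue history, predict a communicative-function label for the interviewer's next utterance. The labels and the keyword heuristic below are purely illustrative stand-ins for the paper's corpus-trained classifier.

```python
# Hypothetical sketch: classify the communicative function of the
# interviewer's next utterance from the most recent turn.
# Labels and keywords are illustrative, not from the actual model.
FUNCTION_KEYWORDS = {
    "question": {"what", "why", "how", "when", "where", "did", "do"},
    "acknowledgment": {"see", "right", "okay", "yes"},
    "elaboration_request": {"tell", "more", "about", "example"},
}

def predict_function(dialogue_history: list[str]) -> str:
    """Predict a function label via keyword overlap with the last turn."""
    last_turn = dialogue_history[-1].lower().split()
    scores = {
        label: sum(1 for w in last_turn if w in kws)
        for label, kws in FUNCTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to asking a question when nothing matches.
    return best if scores[best] > 0 else "question"
```

A trained model would replace the keyword table with features learned from the annotated corpus; the input/output contract stays the same.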
1 code implementation • 24 Apr 2024 • Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao
It is imperative for large language models (LLMs) to follow instructions with elaborate requirements (i.e., Complex Instruction Following).
2 code implementations • 17 Sep 2023 • Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
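The general shape of such a benchmark can be sketched as checking each constraint of a complex instruction separately and aggregating the results. The constraints below are hypothetical examples, not CELLO's actual criteria.

```python
# Hypothetical sketch: score an LLM response against several
# explicit constraints of a complex instruction, one check per
# constraint. These checks are illustrative only.
def score_response(response: str) -> dict[str, bool]:
    """Return a pass/fail result for each (illustrative) constraint."""
    return {
        "word_limit_50": len(response.split()) <= 50,
        "contains_keyword": "summary" in response.lower(),
        "bullet_format": response.lstrip().startswith("-"),
    }

def overall(checks: dict[str, bool]) -> float:
    """Aggregate per-constraint results into a single score in [0, 1]."""
    return sum(checks.values()) / len(checks)
```

Per-criterion scoring like this is what distinguishes systematic complex-instruction evaluation from a single pass/fail judgment.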
no code implementations • 20 Aug 2023 • Jie Zeng, Zeyu Han, Xingchen Peng, Jianghong Xiao, Peng Wang, Yan Wang
Recently, deep learning (DL) has significantly automated and accelerated clinical radiation therapy (RT) planning by predicting accurate dose maps.
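The dose-prediction task maps an anatomy image (e.g., a CT slice with organ masks) to a per-voxel dose map of the same shape. The "model" below is just a fixed smoothing filter standing in for a trained encoder-decoder network; it only demonstrates the input/output contract, not the paper's method.

```python
# Hypothetical sketch of the dose-prediction I/O contract: a 2D
# anatomy slice in, a same-shape per-pixel dose map out. A trained
# network (e.g., a U-Net) learns this mapping; here a 3x3 mean
# filter is a placeholder.
import numpy as np

def predict_dose_map(ct_slice: np.ndarray) -> np.ndarray:
    """Return a dose map with the same shape as the input slice."""
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(ct_slice, 1, mode="edge")
    out = np.zeros_like(ct_slice, dtype=float)
    h, w = ct_slice.shape
    for i in range(h):
        for j in range(w):
            # Mean over the 3x3 neighborhood (placeholder computation).
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```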