1 code implementation • 17 Mar 2024 • Honglin Mu, Yang Xu, Yunlong Feng, Xiaofeng Han, Yitong Li, Yutai Hou, Wanxiang Che
With the rise of Large Language Models (LLMs), AI assistants' ability to utilize tools, especially through API calls, has advanced notably.
1 code implementation • 30 Jan 2024 • Shijue Huang, Wanjun Zhong, Jianqiao Lu, Qi Zhu, Jiahui Gao, Weiwen Liu, Yutai Hou, Xingshan Zeng, Yasheng Wang, Lifeng Shang, Xin Jiang, Ruifeng Xu, Qun Liu
The recent trend of using Large Language Models (LLMs) as tool agents in real-world applications underscores the necessity for comprehensive evaluations of their capabilities, particularly in complex scenarios involving planning, creating, and using tools.
2 code implementations • 22 Nov 2023 • Yilun Liu, Shimin Tao, Xiaofeng Zhao, Ming Zhu, Wenbing Ma, Junhao Zhu, Chang Su, Yutai Hou, Miao Zhang, Min Zhang, Hongxia Ma, Li Zhang, Hao Yang, Yanfei Jiang
Instruction tuning is crucial for enabling Large Language Models (LLMs) to respond to human instructions.
no code implementations • 19 Apr 2023 • Bohan Li, Longxu Dou, Yutai Hou, Yunlong Feng, Honglin Mu, Qingfu Zhu, Qinghua Sun, Wanxiang Che
Prompt-based learning has shown considerable promise in reformulating various downstream tasks as cloze problems by combining original input with a predetermined template.
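As an illustration of the cloze reformulation described above (a generic sketch, not this paper's specific method — the template, mask token, and verbalizer below are illustrative assumptions):

```python
# Reformulate a classification input as a cloze problem by combining
# the original input with a predetermined template containing a mask slot.
def build_cloze_prompt(text: str,
                       template: str = "{text} Overall, it was [MASK].") -> str:
    """Combine the original input with a predetermined template."""
    return template.format(text=text)

# A verbalizer maps candidate fillers for [MASK] back to task labels.
VERBALIZER = {"great": "positive", "terrible": "negative"}

prompt = build_cloze_prompt("The movie was a delight.")
# A masked language model would then score label words at the [MASK] position.
```

In practice, a pretrained masked language model scores the verbalizer words at the `[MASK]` position, and the highest-scoring word determines the predicted label.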
no code implementations • 4 Feb 2023 • Bohan Li, Xiao Xu, Xinghao Wang, Yutai Hou, Yunlong Feng, Feng Wang, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che
In contrast, generative methods bring more image diversity in the augmented images but may not preserve semantic consistency, thus incorrectly changing the essential semantics of the original image.
1 code implementation • COLING 2022 • Yutai Hou, Hongyuan Dong, Xinghao Wang, Bohan Li, Wanxiang Che
The prompting method is regarded as one of the crucial advances in few-shot natural language processing.
1 code implementation • 25 May 2022 • Yang Xu, Yutai Hou, Wanxiang Che, Min Zhang
On the newly defined cross-lingual model editing task, we empirically demonstrate the failure of monolingual baselines in propagating the edit to multiple languages and the effectiveness of the proposed language anisotropic model editing.
1 code implementation • Findings (ACL) 2022 • Yutai Hou, Cheng Chen, Xianzhen Luo, Bohan Li, Wanxiang Che
Such inverse prompting only requires a one-turn prediction for each slot type and greatly speeds up the prediction.
1 code implementation • 5 Oct 2021 • Bohan Li, Yutai Hou, Wanxiang Che
One of the main focuses of the DA methods is to improve the diversity of training data, thereby helping the model to better generalize to unseen testing data.
no code implementations • 27 Sep 2021 • Yutai Hou, Yingce Xia, Lijun Wu, Shufang Xie, Yang Fan, Jinhua Zhu, Wanxiang Che, Tao Qin, Tie-Yan Liu
We regard the DTI triplets as a sequence and use a Transformer-based model to directly generate them without using the detailed annotations of entities and relations.
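The triplet-as-sequence idea can be sketched as follows (a minimal illustration; the separator and field delimiters are assumptions, not the paper's exact tokenization):

```python
# Linearize drug-target interaction (DTI) triplets into a flat token
# sequence that a seq2seq (e.g., Transformer-based) model could generate
# directly, without separate entity/relation annotations.
def linearize_triplets(triplets):
    """Turn (drug, relation, target) triplets into one target sequence."""
    return " [SEP] ".join(f"{d} | {r} | {t}" for d, r, t in triplets)

seq = linearize_triplets([("aspirin", "inhibits", "COX-1"),
                          ("metformin", "activates", "AMPK")])
```

At inference time, the generated sequence is split on the separator tokens to recover structured triplets.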
no code implementations • Findings (ACL) 2021 • Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, Ting Liu
However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from jointly learning the two tasks.
1 code implementation • 13 Dec 2020 • Yutai Hou, Sanyuan Chen, Wanxiang Che, Cheng Chen, Ting Liu
Slot filling, a fundamental module of spoken language understanding, often suffers from insufficient quantity and diversity of training data.
no code implementations • 11 Oct 2020 • Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, Ting Liu
In this paper, we study the few-shot multi-label classification for user intent detection.
3 code implementations • 17 Sep 2020 • Yutai Hou, Jiafeng Mao, Yongkui Lai, Cheng Chen, Wanxiang Che, Zhigang Chen, Ting Liu
In this paper, we present FewJoint, a novel Few-Shot Learning benchmark for NLP.
2 code implementations • ACL 2020 • Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu
In this paper, we explore the slot tagging with only a few labeled support sentences (a.k.a.
1 code implementation • EMNLP 2020 • Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Deep pretrained language models have achieved great success in the way of pretraining first and then fine-tuning.
1 code implementation • 10 Sep 2019 • Yutai Hou, Meng Fang, Wanxiang Che, Ting Liu
The framework builds a user simulator by first generating diverse dialogue data from templates and then building a new State2Seq user simulator on that data.
no code implementations • 20 Jun 2019 • Yutai Hou, Zhihan Zhou, Yijia Liu, Ning Wang, Wanxiang Che, Han Liu, Ting Liu
It calculates emission score with similarity based methods and obtains transition score with a specially designed transfer mechanism.
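A similarity-based emission score can be sketched like this (a hedged illustration of the general idea — the prototype representation and cosine similarity choice are assumptions, not this paper's exact formulation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def emission_scores(token_emb, label_prototypes):
    """Emission score for each label = similarity between the token
    embedding and that label's support-set prototype."""
    return {label: cosine(token_emb, proto)
            for label, proto in label_prototypes.items()}

# Toy example: a token close to the "B-city" prototype scores higher there.
scores = emission_scores([1.0, 0.1],
                         {"B-city": [0.9, 0.1], "O": [0.0, 1.0]})
```

The transition (label-to-label) scores would then come from a separate transfer mechanism, and emission plus transition scores are combined as in a standard CRF decoder.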
1 code implementation • COLING 2018 • Yutai Hou, Yijia Liu, Wanxiang Che, Ting Liu
In this paper, we study the problem of data augmentation for language understanding in task-oriented dialogue system.
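A common baseline form of such augmentation is slot-value substitution over delexicalized templates (a generic technique shown here for illustration, not necessarily this paper's exact method; the slot dictionary and placeholder syntax are assumptions):

```python
import itertools

# Illustrative slot dictionary and delexicalized utterance template.
SLOT_VALUES = {"city": ["Boston", "Tokyo"], "date": ["today", "Friday"]}
TEMPLATE = "book a flight to <city> on <date>"

def augment(template, slot_values):
    """Yield new utterances by filling each slot placeholder with every
    combination of candidate values."""
    slots = [s for s in slot_values if f"<{s}>" in template]
    for combo in itertools.product(*(slot_values[s] for s in slots)):
        utt = template
        for slot, value in zip(slots, combo):
            utt = utt.replace(f"<{slot}>", value)
        yield utt

augmented = list(augment(TEMPLATE, SLOT_VALUES))  # 2 cities x 2 dates = 4 utterances
```

Each generated utterance keeps its slot annotations for free, since the filled values are known, which is what makes this style of augmentation attractive for slot filling.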
no code implementations • IJCNLP 2017 • Jinpeng Wang, Yutai Hou, Jing Liu, Yunbo Cao, Chin-Yew Lin
We present in this paper a statistical framework that generates accurate and fluent product descriptions from product attributes.