no code implementations • 9 May 2024 • Zhuoxuan Jiang, Haoyuan Peng, Shanshan Feng, Fan Li, Dongsheng Li
Self-correction is emerging as a promising approach to mitigate the issue of hallucination in Large Language Models (LLMs).
no code implementations • COLING 2022 • Ziming Huang, Zhuoxuan Jiang, Ke Wang, Juntao Li, Shanshan Feng, Xian-Ling Mao
Although most existing methods can fulfil this requirement, they can only model single-source dialogue data and cannot effectively capture the underlying knowledge of relations among the data and subtasks.
no code implementations • 10 Oct 2022 • Zhuoxuan Jiang, Lingfeng Qiao, Di Yin, Shanshan Feng, Bo Ren
Recent language generative models are mostly trained on large-scale datasets, while in some real-world scenarios training data is expensive to obtain and therefore small-scale.
no code implementations • 4 Jul 2022 • Ye Liu, Lingfeng Qiao, Di Yin, Zhuoxuan Jiang, Xinghua Jiang, Deqiang Jiang, Bo Ren
In this paper, to overcome the above challenges from an alternative perspective, we unify these two tasks into one by predicting shot links: a link connects two adjacent shots, indicating that they belong to the same scene or category.
1 code implementation • NAACL 2022 • Yuan Liang, Zhuoxuan Jiang, Di Yin, Bo Ren
To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance.
Ranked #1 on Document-level Event Extraction on ChFinAnn
1 code implementation • 7 Jun 2021 • Sanshi Yu, Zhuoxuan Jiang, Dong-Dong Chen, Shanshan Feng, Dongsheng Li, Qi Liu, JinFeng Yi
Hence, the key is to make full use of rich interaction information among streamers, users, and products.
no code implementations • COLING 2020 • Yipeng Yu, Ran Guan, Jie Ma, Zhuoxuan Jiang, Jingchang Huang
In online customer service applications, multiple chatbots that are specialized in various topics are typically developed separately and then merged, together with human agents, into a single platform that presents users with a unified interface.
no code implementations • 11 Nov 2019 • Zhuoxuan Jiang, Ziming Huang, Dong Sheng Li, Xian-Ling Mao
In this paper, we propose a novel joint end-to-end model based on multi-task representation learning, named DialogAct2Vec, which captures knowledge from heterogeneous information by automatically learning knowledgeable low-dimensional embeddings from data.
no code implementations • WS 2019 • Zhuoxuan Jiang, Xian-Ling Mao, Ziming Huang, Jie Ma, Shaochun Li
Learning an efficient dialogue-agent manager from data with little manual intervention is important, especially for goal-oriented dialogues.
no code implementations • EMNLP 2017 • Zhuoxuan Jiang, Shanshan Feng, Gao Cong, Chunyan Miao, Xiaoming Li
Recent years have witnessed the proliferation of Massive Open Online Courses (MOOCs).