Search Results for author: Quan Tu

Found 13 papers, 8 papers with code

360°REA: Towards A Reusable Experience Accumulation with 360° Assessment for Multi-Agent System

no code implementations • 8 Apr 2024 • Shen Gao, Hao Li, Zhengliang Shi, Chengrui Huang, Quan Tu, Zhiliang Tian, Minlie Huang, Shuo Shang

The framework employs a novel 360° performance assessment method for multi-perspective performance evaluation with fine-grained assessment.

Language Modelling · Large Language Model

StyleChat: Learning Recitation-Augmented Memory in LLMs for Stylized Dialogue Generation

no code implementations • 18 Mar 2024 • Jinpeng Li, Zekai Zhang, Quan Tu, Xin Cheng, Dongyan Zhao, Rui Yan

Furthermore, although many prompt-based methods have been proposed to accomplish specific tasks, their performance in complex real-world scenarios involving a wide variety of dialogue styles still requires further enhancement.

Dialogue Generation

StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses

no code implementations • 13 Mar 2024 • Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan

Accordingly, we introduce StreamingDialogue, which compresses long dialogue history into conv-attn sinks with minimal losses, and thus reduces computational complexity quadratically with the number of sinks (i.e., the number of utterances).
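
The compression idea can be sketched in a few lines. This is a toy simplification, not the paper's trained conv-attn sink mechanism: here each utterance is simply mean-pooled into one vector, and all names and shapes are illustrative.

```python
import numpy as np

def compress_to_sinks(utterance_token_embs):
    """Collapse each utterance (a T_i x d token matrix) into one d-dim sink."""
    return np.stack([u.mean(axis=0) for u in utterance_token_embs])

rng = np.random.default_rng(0)
d = 8
# A 3-utterance history totalling 49 tokens of dimension d.
history = [rng.normal(size=(t, d)) for t in (12, 30, 7)]

sinks = compress_to_sinks(history)
print(sinks.shape)  # (3, 8)
```

In this toy, attention over the history now involves 3 sink vectors rather than 49 tokens, so the quadratic attention cost scales with the number of utterances instead of the number of tokens.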

Generative News Recommendation

1 code implementation • 6 Mar 2024 • Shen Gao, Jiabao Fang, Quan Tu, Zhitao Yao, Zhumin Chen, Pengjie Ren, Zhaochun Ren

In this paper, we propose a novel generative news recommendation paradigm that includes two steps: (1) Leveraging the internal knowledge and reasoning capabilities of the Large Language Model (LLM) to perform high-level matching between candidate news and user representation; (2) Generating a coherent and logically structured narrative based on the associations between related news and user interests, thus engaging users in further reading of the news.
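
The two-step paradigm reads naturally as a small pipeline. The sketch below is hypothetical: `recommend`, the prompt wording, and the assumption that `llm` returns a numeric string for scoring prompts are all illustrative stand-ins, not the paper's implementation.

```python
def recommend(llm, candidate_news, user_profile, top_k=3):
    """Hypothetical two-step pipeline: LLM matching, then narrative generation."""
    # Step 1: high-level matching between candidate news and the user representation.
    scored = [(float(llm("Score 0-10 how well this news fits the user.\n"
                         f"User: {user_profile}\nNews: {n}")), n)
              for n in candidate_news]
    picked = [n for _, n in sorted(scored, reverse=True)[:top_k]]
    # Step 2: generate one coherent narrative linking the selected items
    # to the user's interests, to engage the user in further reading.
    return llm("Write a short narrative connecting these stories for the user.\n"
               f"User: {user_profile}\nNews: {picked}")
```

Any callable that maps a prompt string to a string can be dropped in as `llm`, which keeps the matching and narrative steps cleanly separated.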

Language Modelling · Large Language Model +1

"In Dialogues We Learn": Towards Personalized Dialogue Without Pre-defined Profiles through In-Dialogue Learning

no code implementations • 5 Mar 2024 • Chuanqi Cheng, Quan Tu, Wei Wu, Shuo Shang, Cunli Mao, Zhengtao Yu, Rui Yan

Personalized dialogue systems have gained significant attention in recent years for their ability to generate responses in alignment with different personas.

Dialogue Generation

CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation

1 code implementation • 2 Jan 2024 • Quan Tu, Shilong Fan, Zihang Tian, Rui Yan

Recently, the advent of large language models (LLMs) has revolutionized generative agents.

RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models

1 code implementation • 26 Dec 2023 • Tianhao Shen, Sun Li, Quan Tu, Deyi Xiong

We expect that RoleEval would highlight the significance of assessing role knowledge for large language models across various languages and cultural settings.

Memorization · Multiple-choice

Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse

1 code implementation • 13 Nov 2023 • Ang Lv, Kaiyi Zhang, Shufang Xie, Quan Tu, Yuhan Chen, Ji-Rong Wen, Rui Yan

Recent studies have highlighted a phenomenon in large language models (LLMs) known as "the reversal curse," in which the order of knowledge entities in the training data biases the models' comprehension.

Denoising · Language Modelling

CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment

no code implementations • 25 Oct 2023 • Jixiang Hong, Quan Tu, Changyu Chen, Xing Gao, Ji Zhang, Rui Yan

With in-context learning (ICL) as the core of the cycle, the black-box models are able to rank the model-generated responses, guided by human-crafted instructions and demonstrations of their preferences.
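
The iterative cycle can be illustrated with a self-contained toy: a "white-box" proposer is repeatedly nudged toward a "black-box" ranker's preferences using only ranking feedback. The numeric setup, the hidden scalar preference, and the update rule are illustrative assumptions, not CycleAlign's actual training procedure.

```python
import random

random.seed(0)
TARGET = 10.0  # the black box's hidden preference (never revealed directly)

def black_box_rank(candidates):
    """The black box only returns an ordering over candidates, never a score."""
    return sorted(candidates, key=lambda x: abs(x - TARGET))

def cycle_align(mean=0.0, rounds=20, k=8):
    """Iteratively distill the ranking signal into the white-box proposer."""
    for _ in range(rounds):
        candidates = [random.gauss(mean, 2.0) for _ in range(k)]
        best = black_box_rank(candidates)[0]  # preference feedback via ranking
        mean += 0.5 * (best - mean)           # nudge the proposer toward it
    return mean

print(round(cycle_align(), 1))  # converges near TARGET
```

Even though the ranker never exposes its target, the proposer's distribution drifts toward it round by round, which is the essence of distilling alignment from ranking-only feedback.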

In-Context Learning · Instruction Following +2

CharacterChat: Learning towards Conversational AI with Personalized Social Support

1 code implementation • 20 Aug 2023 • Quan Tu, Chuanqi Chen, Jinpeng Li, Yanran Li, Shuo Shang, Dongyan Zhao, Ran Wang, Rui Yan

In our modern, fast-paced, and interconnected world, the importance of mental well-being has grown into a matter of great urgency.

MISC: A MIxed Strategy-Aware Model Integrating COMET for Emotional Support Conversation

1 code implementation • ACL 2022 • Quan Tu, Yanran Li, Jianwei Cui, Bin Wang, Ji-Rong Wen, Rui Yan

Applying existing methods to emotional support conversation -- which provides valuable assistance to people who are in need -- has two major limitations: (a) they generally employ a conversation-level emotion label, which is too coarse-grained to capture the user's instant mental state; (b) most of them focus on expressing empathy in the response(s) rather than gradually reducing the user's distress.
