1 code implementation • 2 Apr 2024 • Xuechen Liang, Meiling Tao, Tianyu Shi, Yiting Xie
Open large language models (LLMs) have significantly advanced natural language processing, showing impressive performance across a wide range of tasks. Despite these advances, their effective operation still depends heavily on human input to accurately guide the dialogue flow; agent tuning, a crucial optimization technique in which humans adjust the model so that it responds better to such guidance, addresses part of this need. To reduce this dependency, our work introduces the TinyAgent model, trained on a meticulously curated, high-quality dataset.
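For intuition, the core of agent tuning as described here is supervised fine-tuning of a small base model on curated instruction–response pairs. The abstract does not specify TinyAgent's base checkpoint, data format, or hyperparameters, so every name in the minimal sketch below is an assumption, not the paper's actual recipe:

```python
# Illustrative sketch only: the base model, dataset fields, and
# hyperparameters are assumptions; TinyAgent's real pipeline may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen1.5-1.8B"  # hypothetical small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical curated agent-instruction pairs (instruction -> agent response).
curated = [
    {"prompt": "Book a table for two at 7pm.",
     "response": "call: restaurant_api(party=2, time='19:00')"},
]

def encode(example):
    # Concatenate prompt and response; the model learns to continue the prompt.
    text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    batch["labels"] = batch["input_ids"].clone()  # standard causal-LM loss
    return batch

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for example in curated:  # toy single-pass loop; real training batches and shuffles
    batch = encode(example)
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```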
1 code implementation • 17 Dec 2023 • Meiling Tao, Xuechen Liang, Tianyu Shi, Lei Yu, Yiting Xie
This study presents RoleCraft-GLM, an innovative framework aimed at enhancing personalized role-playing with Large Language Models (LLMs).
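As a rough illustration of persona-conditioned role-playing, the sketch below assembles a character card into a system prompt for an instruction-tuned chat model. The character-card schema and message format are assumptions for illustration; they are not RoleCraft-GLM's actual design:

```python
# Illustrative sketch only: the card fields and chat-message format below
# are assumptions, not the RoleCraft-GLM framework itself.
from dataclasses import dataclass, field

@dataclass
class CharacterCard:
    name: str
    personality: str
    speech_style: str
    memories: list[str] = field(default_factory=list)

def build_roleplay_messages(card: CharacterCard, user_turn: str) -> list[dict]:
    """Assemble persona-conditioned chat messages for an instruction-tuned LLM."""
    persona = (
        f"You are {card.name}. Personality: {card.personality}. "
        f"Speech style: {card.speech_style}. "
        f"Relevant memories: {'; '.join(card.memories)}. "
        "Stay strictly in character."
    )
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_turn},
    ]

card = CharacterCard(
    name="Lin",
    personality="warm, curious, slightly mischievous",
    speech_style="short sentences with playful questions",
    memories=["grew up by the sea", "studies astronomy"],
)
messages = build_roleplay_messages(card, "What did you do today?")
# `messages` can then be passed to any chat-completion API or chat template.
```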