1 code implementation • 19 Feb 2024 • Junru Lu, Siyu An, Min Zhang, Yulan He, Di Yin, Xing Sun
In the quest to make the deep intelligence of Large Language Models (LLMs) accessible to end users in user-bot interactions, prompt crafting emerges as a critical yet complex task for the average user.
1 code implementation • 16 Aug 2023 • Junru Lu, Siyu An, Mingbao Lin, Gabriele Pergola, Yulan He, Di Yin, Xing Sun, Yunsheng Wu
We propose MemoChat, a pipeline for refining instructions that enables large language models (LLMs) to effectively employ self-composed memos for maintaining consistent long-range open-domain conversations.
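The memo-write-then-retrieve idea can be sketched as a toy loop: the bot records self-composed (topic, summary) memos and, before each new turn, pulls back the memos most relevant to the query. All names and the keyword-overlap scoring below are illustrative assumptions, not the MemoChat implementation.

```python
class MemoStore:
    """Stores self-composed memos as (topic, summary) pairs."""

    def __init__(self):
        self.memos = []  # list of (topic, summary)

    def write(self, topic, summary):
        self.memos.append((topic, summary))

    def retrieve(self, query, k=2):
        # Toy relevance score: number of lowercase tokens shared
        # between the query and the memo topic.
        q = set(query.lower().split())
        scored = sorted(
            self.memos,
            key=lambda m: len(q & set(m[0].lower().split())),
            reverse=True,
        )
        return scored[:k]


store = MemoStore()
store.write("trip to Kyoto", "User plans to visit Kyoto in April.")
store.write("favorite food", "User likes ramen and dislikes cilantro.")

# Before answering, the bot retrieves the memo most relevant to the new turn.
hits = store.retrieve("tips for my trip to Kyoto", k=1)
# → the Kyoto memo ranks first
```

A real pipeline would have the LLM itself write the memos and use embedding similarity for retrieval; the structure of the loop is the same.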
1 code implementation • 8 May 2023 • Junru Lu, Gabriele Pergola, Lin Gui, Yulan He
In particular, we define event-related knowledge constraints based on the event trigger annotations in the QA datasets, and subsequently use them to regularize the posterior answer output probabilities from the backbone pre-trained language models used in the QA setting.
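One way to picture constraining posterior answer probabilities with trigger annotations: boost the answer-start logits at tokens annotated as event triggers, then renormalize. This is a minimal illustrative stand-in, not the paper's exact regularizer.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def regularize_posteriors(start_logits, trigger_mask, weight=2.0):
    """Boost start-position logits at annotated event-trigger tokens,
    then renormalize, nudging answers toward event triggers."""
    adjusted = [
        logit + (weight if is_trigger else 0.0)
        for logit, is_trigger in zip(start_logits, trigger_mask)
    ]
    return softmax(adjusted)


logits = [0.1, 2.0, 0.5, 1.5]
mask = [0, 0, 0, 1]  # token 3 is an annotated event trigger
probs = regularize_posteriors(logits, mask)
# the boosted trigger position now carries the highest probability
```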
1 code implementation • 11 Feb 2023 • Junru Lu, Jiazheng Li, Byron C. Wallace, Yulan He, Gabriele Pergola
In this work, we propose a summarize-then-simplify two-stage strategy, which we call NapSS, identifying the relevant content to simplify while ensuring that the original narrative flow is preserved.
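The two-stage strategy can be sketched as: stage 1 selects salient sentences while keeping their original order (preserving narrative flow), and stage 2 rewrites jargon into plain language. The scoring heuristic and mini-lexicon below are illustrative assumptions, not NapSS's trained components.

```python
# Hypothetical plain-language lexicon for the simplification stage.
JARGON = {"hypertension": "high blood pressure", "analgesic": "painkiller"}


def summarize(sentences, keywords, k=2):
    """Stage 1: keep the k sentences most relevant to the keywords,
    returned in their original order to preserve narrative flow."""
    scored = [
        (sum(w in s.lower() for w in keywords), i, s)
        for i, s in enumerate(sentences)
    ]
    top = sorted(scored, reverse=True)[:k]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]


def simplify(sentence):
    """Stage 2: swap medical jargon for plain-language equivalents."""
    for term, plain in JARGON.items():
        sentence = sentence.replace(term, plain)
    return sentence


doc = [
    "The patient has a history of hypertension.",
    "Weather was discussed briefly.",
    "The analgesic was prescribed for back pain.",
]
summary = summarize(doc, keywords=["hypertension", "analgesic", "pain"], k=2)
simple = [simplify(s) for s in summary]
```

In the actual system both stages are learned models; the point here is only the pipeline shape: select first, then rewrite, without reordering.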
1 code implementation • 24 Oct 2022 • Junru Lu, Xingwei Tan, Gabriele Pergola, Lin Gui, Yulan He
Our proposed model utilizes an invertible transformation matrix, trained with contrastive learning, to project semantic vectors of events into a common event embedding space, thereby naturally injecting event semantic knowledge into mainstream QA pipelines.
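The two ingredients, an invertible projection and a contrastive objective, can be sketched as follows. An orthogonal matrix from a QR decomposition stands in for the learned invertible transformation (orthogonality gives invertibility for free, since Q⁻¹ = Qᵀ), and an InfoNCE-style loss stands in for the contrastive training. All details are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Orthogonal matrix via QR decomposition: invertible by construction,
# with Q^{-1} = Q^T (a stand-in for the learned invertible transform).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))


def project(event_vec, W=Q):
    """Map an event's semantic vector into the shared embedding space."""
    return W @ event_vec


def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the positive toward the anchor,
    push the negatives away."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array(
        [sim(anchor, positive)] + [sim(anchor, n) for n in negatives]
    ) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])


e = rng.normal(size=d)
anchor = project(e)
positive = project(e + 0.01 * rng.normal(size=d))   # slightly perturbed event
negatives = [project(rng.normal(size=d)) for _ in range(3)]
loss = contrastive_loss(anchor, positive, negatives)
```

Because the transform is invertible, the original event vector is exactly recoverable from its projection, which is what lets such knowledge be injected into a QA pipeline without losing information.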
1 code implementation • COLING 2020 • Junru Lu, Gabriele Pergola, Lin Gui, Binyang Li, Yulan He
We introduce CHIME, a cross-passage hierarchical memory network for question answering (QA) via text generation.