Search Results for author: Junru Lu

Found 6 papers, 6 papers with code

FIPO: Free-form Instruction-oriented Prompt Optimization with Preference Dataset and Modular Fine-tuning Schema

1 code implementation · 19 Feb 2024 · Junru Lu, Siyu An, Min Zhang, Yulan He, Di Yin, Xing Sun

In the quest to make the deep intelligence of Large Language Models (LLMs) accessible in end-user-bot interactions, the art of prompt crafting emerges as a critical yet complex task for the average user.

MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation

1 code implementation · 16 Aug 2023 · Junru Lu, Siyu An, Mingbao Lin, Gabriele Pergola, Yulan He, Di Yin, Xing Sun, Yunsheng Wu

We propose MemoChat, a pipeline for refining instructions that enables large language models (LLMs) to effectively employ self-composed memos for maintaining consistent long-range open-domain conversations.
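The memo mechanism described above can be illustrated with a minimal sketch. This is not the MemoChat implementation; the functions, topic-keyed storage, and keyword retrieval are simplified assumptions standing in for the LLM-composed memos described in the paper.

```python
def update_memos(memos, topic, note):
    """Append a note under a topic key, mimicking a self-composed memo."""
    memos.setdefault(topic, []).append(note)
    return memos

def retrieve_memo(memos, query):
    """Naive retrieval: return the notes for the first topic mentioned in the query."""
    for topic, notes in memos.items():
        if topic in query:
            return notes
    return []

# Toy usage: the bot records facts, then consults them later for consistency.
memos = {}
update_memos(memos, "pets", "User has a cat named Miso.")
update_memos(memos, "pets", "The cat is 3 years old.")
notes = retrieve_memo(memos, "Tell me about my pets again.")
```

In the paper the memo writing and retrieval are themselves performed by the tuned LLM; here plain dictionary lookups stand in for both steps.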

Memorization · Retrieval

Event Knowledge Incorporation with Posterior Regularization for Event-Centric Question Answering

1 code implementation · 8 May 2023 · Junru Lu, Gabriele Pergola, Lin Gui, Yulan He

In particular, we define event-related knowledge constraints based on the event trigger annotations in the QA datasets, and subsequently use them to regularize the posterior answer output probabilities from the backbone pre-trained language models used in the QA setting.
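A minimal sketch of the idea of regularizing posterior answer probabilities with trigger annotations, assuming a toy setting where each answer position carries a probability and a binary mask marks positions inside annotated event-trigger spans. The boosting scheme and `strength` parameter are illustrative assumptions, not the paper's actual posterior-regularization objective.

```python
import numpy as np

def regularize_posterior(answer_probs, trigger_mask, strength=0.5):
    """Bias answer-position probabilities toward event-trigger spans,
    then renormalize so they still form a distribution (illustrative only)."""
    answer_probs = np.asarray(answer_probs, dtype=float)
    mask = np.asarray(trigger_mask, dtype=float)
    boosted = answer_probs * (1.0 + strength * mask)
    return boosted / boosted.sum()

# Toy usage: positions 1 and 2 fall inside an annotated event trigger span.
probs = regularize_posterior([0.4, 0.3, 0.2, 0.1], [0, 1, 1, 0])
```

After regularization, mass shifts toward the trigger-covered positions while the output remains a valid probability distribution.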

Language Modelling · Question Answering · +1

NapSS: Paragraph-level Medical Text Simplification via Narrative Prompting and Sentence-matching Summarization

1 code implementation · 11 Feb 2023 · Junru Lu, Jiazheng Li, Byron C. Wallace, Yulan He, Gabriele Pergola

In this work, we propose a summarize-then-simplify two-stage strategy, NapSS, which identifies the relevant content to simplify while ensuring that the original narrative flow is preserved.
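The two-stage structure can be sketched as a simple function composition. The stand-in `summarizer` and `simplifier` callables and the `narrative_prompt` argument are hypothetical placeholders for the paper's sentence-matching summarization and narrative-prompted simplification models.

```python
def summarize_then_simplify(document, summarizer, simplifier, narrative_prompt=""):
    """Two-stage sketch: first extract the relevant content, then simplify it,
    optionally conditioning on a narrative prompt to preserve flow."""
    summary = summarizer(document)
    return simplifier(narrative_prompt + summary)

# Toy usage with trivial stand-in stages.
out = summarize_then_simplify(
    "Patients with hypertension ... long report ...",
    summarizer=lambda text: text.split("...")[0].strip(),
    simplifier=lambda text: text.replace("hypertension", "high blood pressure"),
)
```

In NapSS both stages are learned models; the composition above only shows how the summarization output feeds the simplification input.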

Semantic Similarity · Semantic Textual Similarity · +2

Event-Centric Question Answering via Contrastive Learning and Invertible Event Transformation

1 code implementation · 24 Oct 2022 · Junru Lu, Xingwei Tan, Gabriele Pergola, Lin Gui, Yulan He

Our proposed model utilizes an invertible transformation matrix to project semantic vectors of events into a common event embedding space, trained with contrastive learning, thereby naturally injecting event semantic knowledge into mainstream QA pipelines.
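A minimal sketch of an invertible projection: an orthogonal matrix (one convenient choice of invertible matrix) maps event vectors into a shared space, and the inverse map recovers the originals exactly. The orthogonal construction via QR and the cosine score are assumptions for illustration, not the paper's trained matrix or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal matrix => invertible, with a numerically stable inverse.
d = 8
A = np.linalg.qr(rng.standard_normal((d, d)))[0]

event_vec = rng.standard_normal(d)
projected = A @ event_vec                   # project into the common space
recovered = np.linalg.solve(A, projected)   # invert the transformation

def contrastive_score(u, v):
    """Cosine similarity, a common choice of score in contrastive objectives."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Invertibility is what lets the model move between the original event representation and the common embedding space without losing information; a contrastive loss over scores like the one above would pull related events together in that space.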

Contrastive Learning · Question Answering · +2
