Search Results for author: Haotian Luo

Found 5 papers, 2 papers with code

Latent Distance Guided Alignment Training for Large Language Models

no code implementations • 9 Apr 2024 • Haotian Luo

Consequently, we utilize the distance between sample pairs in the latent space to guide DPO-based alignment training.
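
A minimal PyTorch sketch of what such latent-distance guidance might look like, assuming the pooled hidden states of each preference pair are compared and their distance used to weight the standard DPO loss. The weighting scheme and all names below are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def latent_guided_dpo_loss(
    policy_chosen_logps,    # (B,) log-prob of preferred responses under the policy
    policy_rejected_logps,  # (B,) log-prob of dispreferred responses under the policy
    ref_chosen_logps,       # (B,) same, under the frozen reference model
    ref_rejected_logps,     # (B,)
    chosen_latents,         # (B, d) pooled hidden states of preferred responses
    rejected_latents,       # (B, d) pooled hidden states of dispreferred responses
    beta=0.1,
):
    # Standard DPO margin: implicit reward gap between chosen and rejected.
    logits = (policy_chosen_logps - ref_chosen_logps) - (
        policy_rejected_logps - ref_rejected_logps
    )
    # Distance between the pair in latent space, normalized within the batch.
    dist = torch.norm(chosen_latents - rejected_latents, dim=-1)
    weight = dist / (dist.max() + 1e-8)
    # Hypothetical guidance: pairs far apart in latent space contribute more.
    return (weight * -F.logsigmoid(beta * logits)).mean()
```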

LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models

1 code implementation • 8 Apr 2024 • Shibo Hao, Yi Gu, Haotian Luo, Tianyang Liu, Xiyan Shao, Xinyuan Wang, Shuhua Xie, Haodi Ma, Adithya Samavedhi, Qiyue Gao, Zhen Wang, Zhiting Hu

(2) We develop LLM Reasoners, a library for standardized modular implementation of existing and new reasoning algorithms, under a unified formulation of the search, reward, and world model components.
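
An illustrative sketch of that unified formulation: a reasoner assembled from pluggable world-model, reward, and search components. The interfaces below are hypothetical and do not reproduce the actual LLM Reasoners API.

```python
from abc import ABC, abstractmethod

class WorldModel(ABC):
    """Tracks the state of a partial reasoning chain."""
    @abstractmethod
    def init_state(self, question): ...
    @abstractmethod
    def step(self, state, action):
        """Return the next state after applying one reasoning step."""

class RewardFunction(ABC):
    @abstractmethod
    def reward(self, state, action):
        """Score a candidate reasoning step."""

class SearchAlgorithm(ABC):
    @abstractmethod
    def search(self, world_model, reward_fn, init_state):
        """Explore reasoning chains (e.g. beam search, MCTS), return the best."""

class Reasoner:
    """Composes the three components into a step-by-step reasoner."""
    def __init__(self, world_model, reward_fn, search_algo):
        self.world_model = world_model
        self.reward_fn = reward_fn
        self.search_algo = search_algo

    def __call__(self, question):
        state = self.world_model.init_state(question)
        return self.search_algo.search(self.world_model, self.reward_fn, state)
```

Under this kind of formulation, swapping the search algorithm requires no change to the world model or reward, which is the modularity the abstract describes.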

Vector-Quantized Prompt Learning for Paraphrase Generation

no code implementations • 25 Nov 2023 • Haotian Luo, Yixin Liu, Peidong Liu, Xianggen Liu

Therefore, we present vector-quantized prompts as cues to control the generation of pre-trained models.

Paraphrase Generation
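
A minimal PyTorch sketch of vector-quantized prompts, assuming the standard VQ mechanism (nearest-codebook lookup with a straight-through gradient). The module name, codebook size, and dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class VQPrompt(nn.Module):
    def __init__(self, prompt_len=8, dim=768, codebook_size=512):
        super().__init__()
        # Continuous (learnable) prompt vectors and a discrete codebook.
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim))
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self):
        # Snap each prompt vector to its nearest codebook entry.
        dists = torch.cdist(self.prompt, self.codebook.weight)  # (L, K)
        codes = dists.argmin(dim=-1)                            # (L,)
        quantized = self.codebook(codes)                        # (L, dim)
        # Straight-through estimator: the forward pass uses the quantized
        # vectors, while gradients flow back to the continuous prompt.
        return self.prompt + (quantized - self.prompt).detach()
```

The quantized prompt would then be prepended to a frozen pre-trained model's input embeddings to steer paraphrase generation.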

ProSG: Using Prompt Synthetic Gradients to Alleviate Prompt Forgetting of RNN-like Language Models

no code implementations • 3 Nov 2023 • Haotian Luo, Kunming Wu, Cheng Dai, Sixian Ding, Xinhao Chen

RNN-like language models have received renewed attention from NLP researchers in recent years, and several such models have made significant progress, demonstrating performance comparable to that of traditional Transformers.

Language Modelling

PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization

1 code implementation • 25 Oct 2023 • Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric P. Xing, Zhiting Hu

Highly effective, task-specific prompts are often heavily engineered by experts to integrate detailed instructions and domain insights based on a deep understanding of both the instincts of large language models (LLMs) and the intricacies of the target task.
