Search Results for author: Ethan Chern

Found 4 papers, 4 papers with code

Reformatted Alignment

1 code implementation • 19 Feb 2024 • Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li, Shwai He, Ethan Chern, Jiewen Hu, PengFei Liu

This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence.

Tasks: GSM8K, Hallucination, +2 more
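The abstract only gestures at the mechanics, so here is a minimal sketch of what criteria-guided reformatting could look like in practice. Everything in it is an assumption: `call_llm` stands in for any chat-completion API, and the criteria table and prompt wording are illustrative, not taken from the ReAlign paper.

```python
# Hypothetical sketch of criteria-guided response reformatting.
# `call_llm` is a stand-in for any chat-completion API; the criteria
# and prompt wording are invented for illustration.

def call_llm(prompt: str) -> str:
    """Stub: replace with a real LLM API call."""
    raise NotImplementedError

# Illustrative format criteria, keyed by task type (not the paper's criteria).
CRITERIA = {
    "math": "State the reasoning step by step, then give the final answer on its own line.",
    "open_qa": "Answer directly first, then add supporting detail in short paragraphs.",
}

def realign_response(task_type: str, query: str, response: str, evidence: str = "") -> str:
    """Rewrite an existing response so it follows a pre-defined format criterion."""
    criterion = CRITERIA.get(task_type, "Use clear structure, with lists or headers where helpful.")
    prompt = (
        "Rewrite the response to the query below so that it satisfies this format "
        "criterion, without changing its factual content.\n"
        f"Criterion: {criterion}\n"
        + (f"Supporting evidence: {evidence}\n" if evidence else "")
        + f"Query: {query}\nOriginal response: {response}\nRewritten response:"
    )
    return call_llm(prompt)
```

The key design point the abstract implies is that the original answer's content is kept while only its format is rewritten against the criterion, which is why the sketch passes the existing response back through the model rather than regenerating it from scratch.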

Can Large Language Models be Trusted for Evaluation? Scalable Meta-Evaluation of LLMs as Evaluators via Agent Debate

1 code implementation • 30 Jan 2024 • Steffi Chern, Ethan Chern, Graham Neubig, PengFei Liu

Despite the utility of Large Language Models (LLMs) across a wide range of tasks and scenarios, developing a method for reliably evaluating LLMs across varied contexts continues to be challenging.
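As a rough illustration of the agent-debate idea in the title, the sketch below has two LLM agents alternately defend and challenge an evaluator's judgment before a judge model issues a verdict. The control flow, role names, and prompts are all assumptions made for this sketch; they are not the paper's implementation.

```python
# Hypothetical sketch of meta-evaluating an LLM evaluator via agent debate.
# All prompts and the round structure are illustrative assumptions.

def call_llm(prompt: str) -> str:
    """Stub: replace with a real LLM API call."""
    raise NotImplementedError

def debate_verdict(sample: str, evaluator_judgment: str, n_rounds: int = 2) -> str:
    """Two agents argue for and against a judgment; a judge model decides."""
    transcript = []
    for rnd in range(n_rounds):
        for stance in ("defend", "challenge"):
            argument = call_llm(
                f"You must {stance} the following evaluation of a model output.\n"
                f"Sample: {sample}\nEvaluation: {evaluator_judgment}\n"
                f"Debate so far: {transcript}\nYour argument:"
            )
            transcript.append((rnd, stance, argument))
    # A separate judge call aggregates the debate into a final meta-verdict.
    return call_llm(
        "Given this debate transcript, decide whether the evaluation is "
        f"reliable. Answer 'reliable' or 'unreliable'.\nTranscript: {transcript}"
    )
```

The point of the debate structure is scalability: instead of collecting human meta-judgments for every context, disagreements between agents surface the cases where an evaluator's judgment is fragile.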

Align on the Fly: Adapting Chatbot Behavior to Established Norms

1 code implementation • 26 Dec 2023 • Chunpu Xu, Steffi Chern, Ethan Chern, Ge Zhang, Zekun Wang, Ruibo Liu, Jing Li, Jie Fu, PengFei Liu

In this paper, we aim to align large language models with the ever-changing, complex, and diverse human values (e.g., social norms) across time and locations.

Tasks: Chatbot
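The core idea the abstract suggests, adapting behavior at inference time rather than retraining, can be sketched as retrieving the norms that currently apply and injecting them into the prompt. The rule store, scoring function, and prompt below are placeholders invented for this sketch, not the paper's system.

```python
# Hypothetical sketch of adapting a chatbot to locale-specific norms on the
# fly via retrieval. The rule store and ranking are illustrative placeholders.

def call_llm(prompt: str) -> str:
    """Stub: replace with a real LLM API call."""
    raise NotImplementedError

# Toy rule store; in practice this would be an updatable external database.
RULES = [
    {"locale": "US", "text": "Tipping 15-20% at restaurants is customary."},
    {"locale": "JP", "text": "Tipping is generally not expected and may be declined."},
]

def retrieve_rules(locale: str, query: str, k: int = 3) -> list[str]:
    """Naive retrieval: filter by locale, rank by word overlap with the query."""
    def overlap(text: str) -> int:
        return len(set(text.lower().split()) & set(query.lower().split()))
    candidates = [r["text"] for r in RULES if r["locale"] == locale]
    return sorted(candidates, key=overlap, reverse=True)[:k]

def answer_with_norms(locale: str, query: str) -> str:
    """Inject currently applicable norms into the prompt instead of retraining."""
    norms = "\n".join(retrieve_rules(locale, query))
    return call_llm(f"Follow these local norms:\n{norms}\n\nUser: {query}\nAssistant:")
```

Because the norms live outside the model, updating behavior for a new time or place only requires editing the rule store, which is what makes the "on the fly" framing possible.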

Alignment for Honesty

1 code implementation • 12 Dec 2023 • Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, PengFei Liu

Recent research has made significant strides in applying alignment techniques to enhance the helpfulness and harmlessness of large language models (LLMs) in accordance with human intentions.
