1 code implementation • 30 Jan 2024 • Steffi Chern, Ethan Chern, Graham Neubig, Pengfei Liu
Despite the utility of Large Language Models (LLMs) across a wide range of tasks and scenarios, developing a method for reliably evaluating LLMs across varied contexts continues to be challenging.
1 code implementation • 11 Jan 2024 • Steffi Chern, Zhen Fan, Andy Liu
While state-of-the-art language models have achieved impressive results, they remain susceptible to inference-time adversarial attacks, such as adversarial prompts generated by red teams (arXiv:2209.07858).
1 code implementation • 26 Dec 2023 • Chunpu Xu, Steffi Chern, Ethan Chern, Ge Zhang, Zekun Wang, Ruibo Liu, Jing Li, Jie Fu, Pengfei Liu
In this paper, we aim to align large language models with the ever-changing, complex, and diverse human values (e.g., social norms) across time and locations.
4 code implementations • 25 Jul 2023 • I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
With the above challenges in mind, in this paper, we propose FacTool, a task- and domain-agnostic framework for detecting factual errors in texts generated by large language models (e.g., ChatGPT).