Search Results for author: Taiwei Shi

Found 5 papers, 3 papers with code

How Susceptible are Large Language Models to Ideological Manipulation?

1 code implementation • 18 Feb 2024 • Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman

Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information.

Can Language Model Moderators Improve the Health of Online Discourse?

no code implementations • 16 Nov 2023 • Hyundong Cho, Shuai Liu, Taiwei Shi, Darpan Jain, Basem Rizk, YuYang Huang, Zixun Lu, Nuan Wen, Jonathan Gratch, Emilio Ferrara, Jonathan May

Human moderation of online conversation is essential to maintaining civility and focus in a dialogue, but is challenging to scale and harmful to moderators.

Language Modelling • Text Generation

Safer-Instruct: Aligning Language Models with Automated Preference Data

1 code implementation • 15 Nov 2023 • Taiwei Shi, Kai Chen, Jieyu Zhao

To verify the effectiveness of Safer-Instruct, we apply the pipeline to construct a safety preference dataset as a case study.

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

1 code implementation • 24 Oct 2023 • Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F. Chen, Zhengyuan Liu, Diyi Yang

Annotated data plays a critical role in Natural Language Processing (NLP) in training models and evaluating their performance.

Text Annotation

Neural Story Planning

no code implementations • 16 Dec 2022 • Anbang Ye, Christopher Cui, Taiwei Shi, Mark O. Riedl

Traditional symbolic planners plan a story from a goal state and guarantee logical causal plot coherence but rely on a library of hand-crafted actions with their preconditions and effects.
