Search Results for author: Sherry Tongshuang Wu

Found 4 papers, 2 papers with code

Generating Situated Reflection Triggers about Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning

no code implementations • 28 Apr 2024 • Atharva Naik, Jessica Ruhan Yin, Anusha Kamath, Qianou Ma, Sherry Tongshuang Wu, Charles Murray, Christopher Bogart, Majd Sakr, Carolyn P. Rose

An advantage of Large Language Models (LLMs) is their contextualization capability: they can tailor responses to student inputs such as solution strategy or prior discussion, potentially engaging students better than standard feedback.

Cloud Computing
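
A minimal sketch of the kind of contextualized prompting the abstract describes (the prompt wording, model choice, and function are illustrative assumptions, not the authors' pipeline; assumes the OpenAI Python client):

# Illustrative only, not the paper's method. Requires the openai package
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def reflection_trigger(student_solution: str, discussion: str) -> str:
    """Ask an LLM for a reflection question grounded in the students' own work."""
    prompt = (
        "A student team solved a cloud-computing exercise as follows:\n"
        f"{student_solution}\n\n"
        f"Their prior discussion:\n{discussion}\n\n"
        "Suggest one short reflection question that points them toward an "
        "alternative solution path they did not consider."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content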

Do LLMs exhibit human-like response biases? A case study in survey design

1 code implementation • 7 Nov 2023 • Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig

As large language models (LLMs) become more capable, there is growing excitement about the possibility of using LLMs as proxies for humans in real-world tasks where subjective labels are desired, such as in surveys and opinion polling.
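
A minimal sketch of one way to probe a human-like response bias, such as sensitivity to answer-option order (the question, options, and model are made-up assumptions, not the paper's released code):

# Illustrative sketch, not the authors' implementation: compare the model's
# answer distribution for the original vs. reversed option order.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask(question: str, options: list[str], n: int = 20) -> Counter:
    prompt = (
        question + "\n" + "\n".join(f"- {o}" for o in options)
        + "\nAnswer with exactly one option."
    )
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # sample repeatedly to estimate a distribution
        )
        answers.append(resp.choices[0].message.content.strip())
    return Counter(answers)

opts = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]
q = "Do you agree that remote work improves productivity?"
print(ask(q, opts))        # original option order
print(ask(q, opts[::-1]))  # reversed order; a human-like primacy bias would
                           # shift mass toward whichever option is listed first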

Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong

no code implementations • 19 Oct 2023 • Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, Jordan Boyd-Graber

To reduce over-reliance on LLMs, we ask LLMs to provide contrastive information: explanations of why the claim could be true and why it could be false, and we then present both sides of the explanation to users.

Fact Checking • Information Retrieval
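
A minimal sketch of the contrastive setup the abstract describes (the prompts and model are assumptions for illustration, not the authors' exact ones):

# Illustrative sketch: elicit one explanation for and one against a claim,
# so both sides can be shown to the user instead of a single verdict.
from openai import OpenAI

client = OpenAI()

def contrastive_explanations(claim: str) -> dict:
    sides = {}
    for stance in ("true", "false"):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": f"Claim: {claim}\nExplain briefly why this claim could be {stance}.",
            }],
        )
        sides[stance] = resp.choices[0].message.content
    return sides  # present both explanations side by side to the user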

"Merge Conflicts!" Exploring the Impacts of External Distractors to Parametric Knowledge Graphs

1 code implementation • 15 Sep 2023 • Cheng Qian, Xinran Zhao, Sherry Tongshuang Wu

Large language models (LLMs) acquire extensive knowledge during pre-training, known as their parametric knowledge.

Hallucination • Knowledge Graphs
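
A minimal sketch of probing parametric knowledge against an injected, conflicting context (the question, distractor text, and model are made-up assumptions, not the paper's released code):

# Illustrative sketch: does an "external distractor" in the prompt override
# what the model answers from its parametric knowledge alone?
from openai import OpenAI

client = OpenAI()

def answer(question: str, context: str | None = None) -> str:
    content = f"Context: {context}\n\n{question}" if context else question
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

q = "What is the capital of Australia?"
baseline = answer(q)  # parametric knowledge only
distracted = answer(q, "Recent reports state the capital of Australia is Sydney.")
print(baseline, distracted, sep="\n")  # compare: do the answers diverge?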
