Search Results for author: Alison Smith-Renner

Found 5 papers, 1 paper with code

Human-Centered Evaluation of Explanations

no code implementations • NAACL (ACL) 2022 • Jordan Boyd-Graber, Samuel Carton, Shi Feng, Q. Vera Liao, Tania Lombrozo, Alison Smith-Renner, Chenhao Tan

The NLP community is increasingly interested in providing explanations for NLP models to help people make sense of model behavior and potentially improve human interaction with models.

Harnessing the Power of LLMs: Evaluating Human-AI Text Co-Creation through the Lens of News Headline Generation

1 code implementation • 16 Oct 2023 • Zijian Ding, Alison Smith-Renner, Wenjuan Zhang, Joel R. Tetreault, Alejandro Jaimes

To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding the system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation.

Headline Generation

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies

no code implementations • 21 Dec 2021 • Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan

Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and work with AI to make decisions.

Decision Making

Why Didn't You Listen to Me? Comparing User Control of Human-in-the-Loop Topic Models

no code implementations • ACL 2019 • Varun Kumar, Alison Smith-Renner, Leah Findlater, Kevin Seppi, Jordan Boyd-Graber

To address the lack of comparative evaluation of Human-in-the-Loop Topic Modeling (HLTM) systems, we implement and evaluate three contrasting HLTM approaches using simulation experiments.

Topic Models
