Search Results for author: Myra Cheng

Found 9 papers, 5 papers with code

NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps

no code implementations • 2 Apr 2024 • Kristina Gligoric, Myra Cheng, Lucia Zheng, Esin Durmus, Dan Jurafsky

The 'use' of words to convey a speaker's intent is traditionally distinguished from the 'mention' of words for quoting what someone said, or pointing out properties of a word.

Hate Speech Detection · Misinformation

AnthroScore: A Computational Linguistic Measure of Anthropomorphism

1 code implementation • 3 Feb 2024 • Myra Cheng, Kristina Gligoric, Tiziano Piccardi, Dan Jurafsky

Anthropomorphism, or the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology.

Language Modelling · Misinformation

CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations

1 code implementation • 17 Oct 2023 • Myra Cheng, Tiziano Piccardi, Diyi Yang

Recent work has aimed to capture nuances of human behavior by using LLMs to simulate responses from particular demographics in settings like social science experiments and public opinion surveys.

Caricature

The Surveillance AI Pipeline

no code implementations • 26 Sep 2023 • Pratyusha Ria Kalluri, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane

Moreover, the majority of these technologies specifically enable extracting data about human bodies and body parts.

Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models

1 code implementation • 29 May 2023 • Myra Cheng, Esin Durmus, Dan Jurafsky

To recognize and mitigate harms from large language models (LLMs), we need to understand the prevalence and nuances of stereotypes in LLM outputs.

Story Generation

Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale

1 code implementation • 7 Nov 2022 • Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan

For example, we find cases of prompting for basic traits or social roles resulting in images reinforcing whiteness as ideal, prompting for occupations resulting in amplification of racial and gender disparities, and prompting for objects resulting in reification of American norms.

Text-to-Image Generation

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms

no code implementations • 25 Aug 2021 • Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race.

Attribute · Decision Making · +1

Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

1 code implementation • 13 Mar 2020 • Maegan Tucker, Myra Cheng, Ellen Novoseller, Richard Cheng, Yisong Yue, Joel W. Burdick, Aaron D. Ames

Optimizing lower-body exoskeleton walking gaits for user comfort requires understanding users' preferences over a high-dimensional gait parameter space.
