Search Results for author: Jieyu Zhao

Found 31 papers, 22 papers with code

Using Item Response Theory to Measure Gender and Racial Bias of a BERT-based Automated English Speech Assessment System

no code implementations • NAACL (BEA) 2022 • Alexander Kwako, Yixin Wan, Jieyu Zhao, Kai-Wei Chang, Li Cai, Mark Hansen

This study addresses the need to examine potential biases of transformer-based models in the context of automated English speech assessment.

Multilingual large language models leak human stereotypes across language boundaries

1 code implementation • 12 Dec 2023 • Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III

We evaluate human stereotypes and stereotypical associations manifested in multilingual large language models such as mBERT, mT5, and ChatGPT.

Self-Contradictory Reasoning Evaluation and Detection

no code implementations • 16 Nov 2023 • Ziyi Liu, Isabelle Lee, Yongkang Du, Soumya Sanyal, Jieyu Zhao

In a plethora of recent work, large language models (LLMs) have demonstrated impressive reasoning ability, but many proposed downstream reasoning tasks focus on performance-oriented evaluation.

Safer-Instruct: Aligning Language Models with Automated Preference Data

1 code implementation • 15 Nov 2023 • Taiwei Shi, Kai Chen, Jieyu Zhao

To verify the effectiveness of Safer-Instruct, we apply the pipeline to construct a safety preference dataset as a case study.

Fair Abstractive Summarization of Diverse Perspectives

1 code implementation • 14 Nov 2023 • Yusen Zhang, Nan Zhang, Yixin Liu, Alexander Fabbri, Junru Liu, Ryo Kamoi, Xiaoxin Lu, Caiming Xiong, Jieyu Zhao, Dragomir Radev, Kathleen McKeown, Rui Zhang

We first formally define fairness in abstractive summarization as not underrepresenting the perspectives of any group of people, and we propose four reference-free automatic metrics that measure the differences between target and source perspectives (a toy reference-free comparison is sketched after this entry).

Abstractive Text Summarization • Fairness
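The entry above proposes reference-free metrics that compare the perspectives present in the source documents against those in the summary. The sketch below is a toy version of that idea under stated assumptions: a hypothetical `label_fn` classifier assigns a perspective label to each sentence, and the metric reports the largest gap in perspective proportions. It is not one of the paper's four metrics.

```python
# Toy reference-free perspective-coverage gauge (illustrative only; not one of
# the paper's four metrics). `label_fn` is a hypothetical classifier mapping a
# sentence to a perspective/group label.
from collections import Counter

def perspective_distribution(sentences, label_fn):
    """Proportion of sentences attributed to each perspective label."""
    counts = Counter(label_fn(s) for s in sentences)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def coverage_gap(source_sentences, summary_sentences, label_fn):
    """Largest absolute difference in perspective proportions between the
    source documents and the summary; 0.0 means the summary mirrors the
    source's mix of perspectives."""
    src = perspective_distribution(source_sentences, label_fn)
    summ = perspective_distribution(summary_sentences, label_fn)
    groups = set(src) | set(summ)
    return max(abs(src.get(g, 0.0) - summ.get(g, 0.0)) for g in groups)
```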

Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

1 code implementation • 8 Oct 2023 • Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang

Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations.

Benchmarking

Equal Long-term Benefit Rate: Adapting Static Fairness Notions to Sequential Decision Making

1 code implementation • 7 Sep 2023 • Yuancheng Xu, ChengHao Deng, Yanchao Sun, Ruijie Zheng, Xiyao Wang, Jieyu Zhao, Furong Huang

Moreover, we show that the policy gradient of the Long-term Benefit Rate can be analytically reduced to the standard policy gradient (a minimal policy-gradient sketch follows this entry).

Decision Making • Fairness
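The entry above states that the policy gradient of the Long-term Benefit Rate reduces to a standard policy gradient. The sketch below illustrates only the generic mechanism, under the assumption that a per-step benefit signal for the tracked group can be plugged into an ordinary REINFORCE update in place of the reward; it is not the paper's algorithm or objective.

```python
# Generic REINFORCE-style update in which a per-step group-benefit signal
# stands in for the reward (an illustrative assumption, not the paper's
# Long-term Benefit Rate objective).
import torch

def benefit_policy_gradient_step(optimizer, log_probs, benefits, gamma=0.99):
    """log_probs: list of log pi(a_t | s_t) scalar tensors from one episode.
    benefits:  list of float benefit signals received by the group per step."""
    returns, running = [], 0.0
    for b in reversed(benefits):                      # discounted benefit-to-go
        running = b + gamma * running
        returns.insert(0, running)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction

    loss = -(torch.stack(log_probs) * returns).sum()  # standard policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```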

TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning

1 code implementation • 22 Jun 2023 • Ruijie Zheng, Xiyao Wang, Yanchao Sun, Shuang Ma, Jieyu Zhao, Huazhe Xu, Hal Daumé III, Furong Huang

Despite recent progress in reinforcement learning (RL) from raw pixel data, sample inefficiency continues to present a substantial obstacle.

Continuous Control • Contrastive Learning • +3

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

no code implementations • 16 Nov 2022 • Anaelia Ovalle, Sunipa Dev, Jieyu Zhao, Majid Sarrafzadeh, Kai-Wei Chang

Therefore, ML auditing tools must be (1) better aligned with ML4H auditing principles and (2) able to illuminate and characterize communities vulnerable to the most harm.

Bias Detection • Clustering • +1

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

no code implementations • 28 Oct 2022 • Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang

Large pre-trained language models have shown remarkable performance over the past few years.

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models

1 code implementation • 13 Oct 2022 • Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger

A common limitation of diagnostic tests for detecting social biases in NLP models is that they may only detect stereotypic associations that are pre-specified by the designer of the test.

Language Modelling • Question Answering

On Measures of Biases and Harms in NLP

no code implementations • 7 Aug 2021 • Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang

Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality.

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

1 code implementation • Findings (ACL) 2021 • Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang

We investigate the effectiveness of natural language interventions for reading-comprehension systems, studying this in the context of social stereotypes.

Ethics • Few-Shot Learning • +2

"The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition

no code implementations • ACL 2020 • Yichao Zhou, Jyun-Yu Jiang, Jieyu Zhao, Kai-Wei Chang, Wei Wang

In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect whether a sentence contains puns, and locate them in the sentence.

Sentence

Mitigating Gender Bias Amplification in Distribution by Posterior Regularization

1 code implementation • ACL 2020 • Shengyu Jia, Tao Meng, Jieyu Zhao, Kai-Wei Chang

With little performance loss, our method can almost remove the bias amplification in the distribution.

"The Boating Store Had Its Best Sail Ever": Pronunciation-attentive Contextualized Pun Recognition

1 code implementation • 29 Apr 2020 • Yichao Zhou, Jyun-Yu Jiang, Jieyu Zhao, Kai-Wei Chang, Wei Wang

In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect whether a sentence contains puns, and locate them in the sentence.

Sentence

Short-Term Temporal Convolutional Networks for Dynamic Hand Gesture Recognition

no code implementations • 31 Dec 2019 • Yi Zhang, Chong Wang, Ye Zheng, Jieyu Zhao, Yuqi Li, Xijiong Xie

Subsequently, in temporal analysis, we use TCNs to extract temporal features and employ improved Squeeze-and-Excitation Networks (SENets) to strengthen the representational power of the temporal features from each TCN layer (the TCN-plus-SE pattern is sketched after this entry).

Hand Gesture Recognition • Hand-Gesture Recognition
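The abstract above pairs temporal convolutional layers with squeeze-and-excitation gating. The PyTorch sketch below shows only that generic pattern; the layer sizes, dilation scheme, and the "improved" SENet variant are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal TCN-layer + squeeze-and-excitation sketch (illustrative; not the
# authors' exact model).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel-wise squeeze-and-excitation over a (batch, channels, time) map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, T)
        weights = self.fc(x.mean(dim=-1))       # squeeze over time -> (B, C)
        return x * weights.unsqueeze(-1)        # excite: reweight channels

class TemporalBlock(nn.Module):
    """Dilated 1D convolution whose output channels are gated by an SE block."""
    def __init__(self, in_channels, out_channels, kernel_size=3, dilation=1):
        super().__init__()
        padding = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size,
                              padding=padding, dilation=dilation)
        self.se = SEBlock(out_channels)

    def forward(self, x):                       # x: (B, C_in, T)
        return self.se(torch.relu(self.conv(x)))
```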

Towards Understanding Gender Bias in Relation Extraction

1 code implementation • ACL 2020 • Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang

We use WikiGenderBias to evaluate systems for bias, find that NRE systems exhibit gender-biased predictions, and lay the groundwork for future evaluation of bias in NRE.

Counterfactual Data Augmentation • +3

Gender Bias in Contextualized Word Embeddings

2 code implementations • NAACL 2019 • Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang

In this paper, we quantify, analyze, and mitigate the gender bias exhibited in ELMo's contextualized word vectors (a simple projection-based probe is sketched after this entry).

Word Embeddings
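As a generic illustration of quantifying gender bias in word representations, the sketch below projects vectors onto a direction built from a few definitional he/she pairs. Here `emb` is a hypothetical lookup (for example, a mean-pooled contextualized vector), and this projection probe is a common diagnostic, not the paper's exact analysis.

```python
# Projection-based gender-bias probe (a common generic diagnostic; not the
# paper's exact methodology). `emb(word)` is a hypothetical function returning
# a NumPy vector, e.g. a mean-pooled contextualized representation.
import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"), ("him", "her"))):
    """Unit-normalized average of difference vectors over definitional pairs."""
    diffs = np.stack([emb(m) - emb(f) for m, f in pairs])
    direction = diffs.mean(axis=0)
    return direction / np.linalg.norm(direction)

def bias_score(word, emb, direction):
    """Cosine of the word vector with the (unit) gender direction; values near
    zero suggest the representation is less aligned with the he-she axis."""
    vec = emb(word)
    return float(vec @ direction / (np.linalg.norm(vec) + 1e-12))
```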

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations

2 code implementations • ICCV 2019 • Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez

In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables, such as gender, in visual recognition tasks.

Temporal Action Localization

Learning Gender-Neutral Word Embeddings

1 code implementation • EMNLP 2018 • Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang

Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications.

Word Embeddings
