StereoSet: Measuring stereotypical bias in pretrained language models

ACL 2021  ·  Moin Nadeem, Anna Bethke, Siva Reddy ·

A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt target groups. Since pretrained language models are trained on large real world data, they are known to capture stereotypical biases. In order to assess the adverse effects of these models, it is important to quantify the bias captured in them. Existing literature on quantifying bias evaluates pretrained language models on a small set of artificially constructed bias-assessing sentences. We present StereoSet, a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and XLNet on our dataset and show that these models exhibit strong stereotypical biases. We also present a leaderboard with a hidden test set to track the bias of future language models at https://stereoset.mit.edu

PDF Abstract ACL 2021 PDF ACL 2021 Abstract

Datasets


Introduced in the Paper:

StereoSet

Results from the Paper


Task Dataset Model Metric Name Metric Value Global Rank Result Benchmark
Bias Detection StereoSet GPT-2 (small) ICAT Score 72.97 # 1
Bias Detection StereoSet XLNet (base) ICAT Score 62.10 # 9
Bias Detection StereoSet RoBERTa (base) ICAT Score 67.50 # 7
Bias Detection StereoSet BERT (large) ICAT Score 69.89 # 6
Bias Detection StereoSet GPT-2 (large) ICAT Score 70.54 # 5
Bias Detection StereoSet BERT (base) ICAT Score 71.21 # 4
Bias Detection StereoSet GPT-2 (medium) ICAT Score 71.73 # 3
Bias Detection StereoSet XLNet (large) ICAT Score 72.03 # 2

Methods