Findings (EMNLP) 2021 • Anne Lauscher, Tobias Lüken, Goran Glavaš
Unfair stereotypical biases (e.g., gender, racial, or religious biases) encoded in modern pretrained language models (PLMs) have negative ethical implications for the widespread adoption of state-of-the-art language technology.