no code implementations • 4 Apr 2024 • Farnaz Kohankhaki, Jacob-Junqi Tian, David Emerson, Laleh Seyyed-Kalantari, Faiza Khan Khattak
This approach is widely used in bias quantification.
1 code implementation • 25 Aug 2023 • Kellin Pelrine, Anne Imouza, Zachary Yang, Jacob-Junqi Tian, Sacha Lévy, Gabrielle Desrosiers-Brisebois, Aarash Feizi, Cécile Amadoro, André Blais, Jean-François Godbout, Reihaneh Rabbany
A large number of studies on social media compare the behaviour of users from different political parties.
no code implementations • 24 Jul 2023 • Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak
Given that language models are trained on vast datasets that may contain inherent biases, there is a potential danger of inadvertently perpetuating systemic discrimination.
no code implementations • 19 Jul 2023 • Omkar Dige, Jacob-Junqi Tian, David Emerson, Faiza Khan Khattak
As the breadth and depth of language model applications continue to expand rapidly, it is increasingly important to build efficient frameworks for measuring and mitigating the learned or inherited social biases of these models.
no code implementations • 7 Jun 2023 • Jacob-Junqi Tian, David Emerson, Sevil Zanjani Miyandoab, Deval Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak
In this paper, we explore the use of soft-prompt tuning on the sentiment classification task to quantify the biases of large language models (LLMs) such as Open Pre-trained Transformers (OPT) and Galactica.
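The core idea of soft-prompt tuning is to prepend a small set of trainable continuous vectors to the input embeddings while keeping the pretrained model frozen, then train only those vectors on the downstream task. The sketch below illustrates this in PyTorch with a tiny stand-in classifier instead of OPT or Galactica; `TinyFrozenLM`, its dimensions, and the prompt length are all hypothetical choices for illustration, not the paper's setup.

```python
import torch
import torch.nn as nn

class TinyFrozenLM(nn.Module):
    """Toy stand-in for a frozen pretrained LM with a classification head.

    (Hypothetical: the paper uses OPT and Galactica, not this architecture.)
    """
    def __init__(self, vocab_size=100, d_model=16, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, inputs_embeds):
        # inputs_embeds: (batch, seq_len, d_model)
        _, h = self.encoder(inputs_embeds)
        return self.head(h[-1])  # logits over sentiment classes

class SoftPromptClassifier(nn.Module):
    """Prepends trainable soft-prompt vectors; only they receive gradients."""
    def __init__(self, lm, n_prompt_tokens=5, d_model=16):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():
            p.requires_grad = False  # freeze the backbone entirely
        # The soft prompt: the only trainable parameters.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.1)

    def forward(self, input_ids):
        token_embeds = self.lm.embed(input_ids)            # (B, T, d)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate soft prompt before the token embeddings.
        return self.lm(torch.cat([prompt, token_embeds], dim=1))
```

For bias quantification, one could then compare the tuned classifier's sentiment predictions on sentence pairs that differ only in a demographic term; a systematic gap in predicted sentiment would indicate a learned bias. The appeal of the soft-prompt approach is that the frozen backbone guarantees any measured gap reflects the pretrained model rather than task-specific fine-tuning of its weights.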