no code implementations • 24 Jul 2023 • Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak
Because language models are trained on vast datasets that may contain inherent biases, there is a danger that they will inadvertently perpetuate systemic discrimination.
no code implementations • 19 Jul 2023 • Omkar Dige, Jacob-Junqi Tian, David Emerson, Faiza Khan Khattak
As the breadth and depth of language model applications continue to expand rapidly, it is increasingly important to build efficient frameworks for measuring and mitigating the learned or inherited social biases of these models.
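As a rough illustration of what "measuring" such bias can look like in practice (this is a minimal sketch of a common counterfactual-pair probe, not the framework proposed in the paper; the model name, sentence pair, and interpretation are illustrative assumptions):

```python
# Minimal sketch: probe a causal LM for social bias by comparing the
# likelihood it assigns to a stereotyped sentence versus a minimally
# edited counterfactual. Requires `torch` and `transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_nll(text: str) -> float:
    """Average negative log-likelihood the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels are shifted internally
    return out.loss.item()

# Hypothetical minimal pair: identical except for the demographic term.
stereotyped = "The nurse said that she would be late."
counterfactual = "The nurse said that he would be late."

nll_s = sentence_nll(stereotyped)
nll_c = sentence_nll(counterfactual)
print(f"NLL (stereotyped):    {nll_s:.3f}")
print(f"NLL (counterfactual): {nll_c:.3f}")
# A consistently lower NLL for the stereotyped variant across many such
# pairs is one rough indicator of learned social bias.
```

A single pair proves nothing on its own; in practice such probes are aggregated over large, curated sets of counterfactual pairs before drawing any conclusion about a model.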