Search Results for author: Omkar Dige

Found 2 papers, 0 papers with code

Interpretable Stereotype Identification through Reasoning

no code implementations • 24 Jul 2023 • Jacob-Junqi Tian, Omkar Dige, David Emerson, Faiza Khan Khattak

Given that language models are trained on vast datasets that may contain inherent biases, there is a potential danger of inadvertently perpetuating systemic discrimination.

Fairness

Can Instruction Fine-Tuned Language Models Identify Social Bias through Prompting?

no code implementations • 19 Jul 2023 • Omkar Dige, Jacob-Junqi Tian, David Emerson, Faiza Khan Khattak

As the breadth and depth of language model applications continue to expand rapidly, it is increasingly important to build efficient frameworks for measuring and mitigating the learned or inherited social biases of these models.

Language Modelling
