1 code implementation • 1 May 2024 • Catarina G Belém, Preethi Seshadri, Yasaman Razeghi, Sameer Singh
A key observation in prior work is that models reinforce stereotypes as a consequence of gendered correlations present in the training data.
1 code implementation • 1 Aug 2023 • Preethi Seshadri, Sameer Singh, Yanai Elazar
Bias amplification is a phenomenon in which models exacerbate biases or stereotypes present in the training data.
no code implementations • 9 Oct 2022 • Preethi Seshadri, Pouya Pezeshkpour, Sameer Singh
Recently, there has been an increase in efforts to understand how large language models (LLMs) propagate and amplify social biases.
1 code implementation • 16 May 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, John Whaley
In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri