Search Results for author: Preethi Seshadri

Found 4 papers, 2 papers with code

The Bias Amplification Paradox in Text-to-Image Generation

1 code implementation • 1 Aug 2023 • Preethi Seshadri, Sameer Singh, Yanai Elazar

Bias amplification is a phenomenon in which models exacerbate biases or stereotypes present in the training data.

Text-to-Image Generation
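
As a rough illustration of the idea in the abstract above (not the paper's actual metric), bias amplification can be pictured as the gap between an attribute's share in a model's generations and its share in the training data. The helper name and the example numbers below are made up.

# Minimal sketch, assuming amplification is measured per concept
# (e.g., an occupation) as generated share minus training share.

def bias_amplification(train_share: float, generated_share: float) -> float:
    """Positive values mean the model over-represents the attribute
    relative to the training data; negative values mean the reverse."""
    return generated_share - train_share

# Illustrative numbers only: if 60% of training images for "doctor" depict
# men but 90% of generated images do, the amplification is +0.30.
print(bias_amplification(train_share=0.60, generated_share=0.90))  # 0.30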

Quantifying Social Biases Using Templates is Unreliable

no code implementations • 9 Oct 2022 • Preethi Seshadri, Pouya Pezeshkpour, Sameer Singh

Recently, there has been an increase in efforts to understand how large language models (LLMs) propagate and amplify social biases.

Attribute • Benchmarking • +1
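
To illustrate why template-based measurement can be unreliable, here is a hypothetical sketch of template probing; the templates, group terms, and scoring function are placeholders, not anything taken from the paper or from a specific library.

from typing import Callable

def bias_gap(template: str, score: Callable[[str], float],
             groups=("young", "old")) -> float:
    """Difference in model score between the two group-substituted
    fillings of the same template."""
    a, b = (score(template.format(group=g)) for g in groups)
    return a - b

# Two near-paraphrase templates for the same probe:
templates = [
    "The {group} person worked as a nurse.",
    "A {group} individual worked as a nurse.",
]
# With a real scorer plugged in (e.g., a language-model likelihood),
# the concern is that bias_gap can swing noticeably between these two
# near-identical templates, making the resulting bias estimate fragile.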

Fonts-2-Handwriting: A Seed-Augment-Train framework for universal digit classification

1 code implementation • 16 May 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, John Whaley

In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.

General Classification • Transfer Learning
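
The snippet below is a rough sketch of the "seed" idea only (rendering numeral glyphs from an open font file into small images), assuming Pillow is installed; the font path, canvas size, and helper name are illustrative, not the paper's actual SAT pipeline.

# Sketch of seed-image generation from a font file, assuming Pillow and a
# local .ttf font; sizes and the example font name are illustrative.

from PIL import Image, ImageDraw, ImageFont

def render_digit(glyph: str, font_path: str, size: int = 28) -> Image.Image:
    """Render a single numeral glyph centered on a blank grayscale canvas."""
    font = ImageFont.truetype(font_path, size=int(size * 0.8))
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    draw.text((size // 2, size // 2), glyph, fill=255, font=font, anchor="mm")
    return img

# Example: build a tiny seed set of Latin digits 0-9 from one open font file.
# seeds = [render_digit(str(d), "OpenSans-Regular.ttf") for d in range(10)]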

A Seed-Augment-Train Framework for Universal Digit Classification

no code implementations • ICLR Workshop DeepGenStruct 2019 • Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri

In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that contains a synthetic seed image dataset generation procedure for languages with different numeral systems using freely available open font file datasets.

Classification • Transfer Learning
