no code implementations • 12 Jan 2024 • Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K. Reddy, Sunipa Dev
First, we show that stereotypical attributes in ViSAGe are three times as likely to appear in generated images of the corresponding identities as other attributes, and that the offensiveness of these depictions is markedly higher for identities from Africa, South America, and Southeast Asia.
1 code implementation • 19 May 2023 • Akshita Jha, Aida Davani, Chandan K. Reddy, Shachi Dave, Vinodkumar Prabhakaran, Sunipa Dev
Stereotype benchmark datasets are crucial to detect and mitigate social stereotypes about groups of people in NLP models.
no code implementations • 7 Feb 2023 • Akshita Jha, Adithya Samavedhi, Vineeth Rakesh, Jaideep Chandrashekar, Chandan K. Reddy
First, the performance gains of transformer-based models come at a steep cost in both training time and resource (memory and energy) consumption.
2 code implementations • 31 May 2022 • Akshita Jha, Chandan K. Reddy
Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, and GraphCodeBERT) have the potential to automate software engineering tasks involving code understanding and code generation.
1 code implementation • 20 Aug 2021 • Akshita Jha, Vineeth Rakesh, Jaideep Chandrashekar, Adithya Samavedhi, Chandan K. Reddy
When handling such long documents, there are three primary challenges: (i) the same word may appear in different contexts throughout a document, (ii) two documents may contain small sections of contextually similar text while the remaining text is dissimilar, which challenges the basic notion of "similarity", and (iii) a single global similarity measure is too coarse to capture the heterogeneity of document content.
no code implementations • 31 Jul 2021 • Akshita Jha, Bhanukiran Vinzamuri, Chandan K. Reddy
In this paper, we propose a novel method to address two key questions: (a) can we simultaneously learn fair disentangled representations while ensuring their utility for downstream tasks, and (b) can we provide theoretical insight into when the proposed approach will be both fair and accurate?
1 code implementation • WS 2017 • Akshita Jha, Radhika Mamidi
Our work helps analyze and understand the widespread ambivalent sexism on social media.