Search Results for author: Pragyan Banerjee

Found 1 paper, 0 papers with code

All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation

no code implementations • 9 Nov 2023 • Pragyan Banerjee, Abhinav Java, Surgan Jandial, Simra Shahid, Shaz Furniturewala, Balaji Krishnamurthy, Sumit Bhatia

Fairness in Language Models (LMs) remains a longstanding challenge, given the inherent biases in training data, which can be perpetuated by models and affect downstream tasks.

Fairness · Language Modelling · +1
