no code implementations • NAACL (DADC) 2022 • Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability.
1 code implementation • 29 Oct 2023 • Anirudh Srinivasan, Venkata S Govindarajan, Kyle Mahowald
We use one such technique, AlterRep, a method of counterfactual probing, to explore the internal structure of multilingual models (mBERT and XLM-R).
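The intervention behind counterfactual probing can be illustrated with a minimal single-direction sketch: project the probe direction out of a hidden representation, then push the representation a fixed distance toward the opposite side of the probe's decision boundary. This is a simplified illustration with hypothetical helper names, not the published AlterRep implementation (which operates on a full probe subspace learned via iterative nullspace projection):

```python
import numpy as np

def alter_rep(h, w, alpha=4.0, toward_positive=True):
    """Simplified single-direction sketch of an AlterRep-style
    counterfactual intervention (hypothetical helper, not the
    authors' implementation).

    h: hidden representation (1-D array)
    w: probe direction separating two classes
    alpha: magnitude of the counterfactual push along w
    """
    w = w / np.linalg.norm(w)          # normalize the probe direction
    h_null = h - np.dot(h, w) * w      # project the probe direction out of h
    sign = 1.0 if toward_positive else -1.0
    return h_null + sign * alpha * w   # push to the chosen side of the boundary
```

After the intervention, the component of the representation along the probe direction is exactly plus or minus `alpha`, while all orthogonal components are untouched; feeding the altered representation back through the model then reveals whether the probed feature was causally used downstream.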
1 code implementation • 26 Oct 2023 • Venkata S Govindarajan, Juan Diego Rodriguez, Kaj Bostrom, Kyle Mahowald
We pretrained our masked language models with three ingredients: an initial pretraining with music data, training on shorter sequences before training on longer ones, and masking specific tokens to target some of the BLiMP subtasks.
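Two of the three ingredients above — the short-to-long length curriculum and the targeted masking — can be sketched in a few lines. This is a minimal illustration with hypothetical names and probabilities, not the paper's training code; it assumes tokens relevant to a BLiMP-style subtask are masked at a higher rate than ordinary tokens:

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, target_vocab, p_target=0.5, p_other=0.15, rng=None):
    """Targeted masking sketch: tokens in target_vocab (e.g. determiners
    or anaphors relevant to a BLiMP subtask) are masked with probability
    p_target, all other tokens with the usual lower probability p_other.
    Returns the masked sequence and per-position prediction labels."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        p = p_target if tok in target_vocab else p_other
        if rng.random() < p:
            masked.append(MASK)
            labels.append(tok)     # model must recover the original token
        else:
            masked.append(tok)
            labels.append(None)    # position not scored in the MLM loss
    return masked, labels

def curriculum_sequences(corpus, short_len=32, long_len=128):
    """Length curriculum sketch: yield short truncations of every
    sequence first, then longer ones."""
    for max_len in (short_len, long_len):
        for sent in corpus:
            yield sent[:max_len]
```

Setting `p_target=1.0` and `p_other=0.0` masks exactly the targeted tokens, which makes the behavior easy to verify; in practice intermediate rates keep the standard MLM objective while oversampling the phenomena of interest.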
1 code implementation • 25 May 2023 • Venkata S Govindarajan, Kyle Mahowald, David I. Beaver, Junyi Jessy Li
While existing work studying bias in NLP focuses on negative or pejorative language use, Govindarajan et al. (2023) offer a revised framing of bias in terms of intergroup social context and its effects on language behavior.
2 code implementations • 14 Sep 2022 • Venkata S Govindarajan, Katherine Atwell, Barea Sinno, Malihe Alikhani, David I. Beaver, Junyi Jessy Li
Current studies of bias in NLP rely mainly on identifying (unwanted or negative) bias towards a specific demographic group.