1 code implementation • NAACL 2022 • Neha Srikanth, Rachel Rudinger
When strong partial-input baselines reveal artifacts in crowdsourced NLI datasets, the performance of full-input models trained on such datasets is often dismissed as reliance on spurious correlations.
no code implementations • 17 Apr 2024 • Neha Srikanth, Marine Carpuat, Rachel Rudinger
We propose a metric for evaluating the paraphrastic consistency of natural language reasoning models, based on the probability that a model achieves the same correctness on two paraphrases of the same problem.
no code implementations • 16 Nov 2023 • Neha Srikanth, Rupak Sarkar, Heran Mane, Elizabeth M. Aparicio, Quynh C. Nguyen, Rachel Rudinger, Jordan Boyd-Graber
Questions posed by information-seeking users often contain implicit false or potentially harmful assumptions.
1 code implementation • Findings (ACL) 2021 • Neha Srikanth, Junyi Jessy Li
Much of modern-day text simplification research focuses on sentence-level simplification, transforming original, more complex sentences into simplified versions.