no code implementations • NAACL (TrustNLP) 2021 • Sawan Kumar, Kalpit Dixit, Kashif Shah
Many existing approaches for interpreting text classification models focus on providing importance scores for parts of the input text, such as words, but offer no way to test or improve the interpretation method itself.
1 code implementation • ACL 2022 • Sawan Kumar
We present ALC (Answer-Level Calibration), where our main suggestion is to model context-independent biases in terms of the probability of a choice without the associated context, and to subsequently remove them using an unsupervised estimate of similarity with the full context.
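A minimal sketch of the general idea, assuming a hypothetical language-model scorer `lm_logprob` and an unsupervised `similarity` function (both placeholders); the weighting shown here is illustrative, not ALC's exact formulation:

```python
# Sketch of answer-level bias calibration (illustrative only).
# lm_logprob(text) is assumed to return a language model's
# log-probability of `text`; similarity(a, b) is any unsupervised
# relatedness estimate in [0, 1].

def calibrated_scores(context, choices, lm_logprob, similarity):
    scores = []
    for choice in choices:
        # Score of the choice conditioned on the full context.
        with_context = lm_logprob(context + " " + choice)
        # Context-independent bias: probability of the choice alone.
        bias = lm_logprob(choice)
        # Modulate the bias correction by how related the choice is
        # to the context (assumed weighting scheme).
        w = similarity(context, choice)
        scores.append(with_context - w * bias)
    return scores

# Usage: pick the choice with the highest calibrated score.
# best = max(range(len(choices)), key=lambda i: scores[i])
```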
no code implementations • 24 Apr 2024 • Sawan Kumar, Rajdip Nayek, Souvik Chakraborty
The study of neural operators has paved the way for approaches to solving partial differential equations (PDEs) that are more efficient than traditional methods.
1 code implementation • Findings (ACL) 2021 • Sawan Kumar, Partha Talukdar
Finally, we analyze the learned prompts to reveal novel insights, including that just two training examples, presented in the right order, can yield competitive performance on sentiment classification and natural language inference.
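To make that observation concrete, here is a hypothetical two-shot prompt for sentiment classification; the template and label words are illustrative assumptions, not the prompts learned in the paper:

```python
# Illustrative two-example prompt, reflecting the finding that two
# well-ordered training examples can be competitive. Template and
# labels are assumptions for demonstration.

def build_prompt(pos_example, neg_example, query):
    return (
        f"Review: {pos_example}\nSentiment: positive\n\n"
        f"Review: {neg_example}\nSentiment: negative\n\n"
        f"Review: {query}\nSentiment:"
    )

prompt = build_prompt(
    "A moving, beautifully acted film.",
    "Dull plot and wooden performances.",
    "An instant classic.",
)
# Feed `prompt` to a left-to-right language model and compare the
# model's probabilities for the continuations " positive" vs " negative".
```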
1 code implementation • ACL 2020 • Sawan Kumar, Partha Talukdar
In this work, we focus on the task of natural language inference (NLI) and address the following question: can we build NLI systems that produce labels with high accuracy, while also generating faithful explanations of their decisions?
no code implementations • IJCNLP 2019 • Sawan Kumar, Shweta Garg, Kartik Mehta, Nikhil Rasiwasia
In this paper, we establish the effectiveness of using hard negatives, coupled with a Siamese network and a suitable loss function, for the tasks of answer selection and answer triggering.
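A minimal sketch of this setup, assuming a shared text encoder `encode` and a margin-based triplet loss; the paper's specific encoder and loss may differ:

```python
import torch
import torch.nn.functional as F

# Sketch of a Siamese scorer trained with hard negatives for answer
# selection (illustrative). `encode` is any shared text encoder that
# maps a string to a fixed-size tensor; hard negatives are incorrect
# answers chosen to be deceptively similar to the question.

def triplet_loss(encode, question, pos_answer, hard_neg_answer, margin=0.5):
    q = F.normalize(encode(question), dim=-1)
    a_pos = F.normalize(encode(pos_answer), dim=-1)
    a_neg = F.normalize(encode(hard_neg_answer), dim=-1)
    # Cosine similarities from the shared (Siamese) encoder.
    s_pos = (q * a_pos).sum(-1)
    s_neg = (q * a_neg).sum(-1)
    # Push the correct answer above the hard negative by a margin.
    return F.relu(margin - s_pos + s_neg).mean()
```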
1 code implementation • ACL 2019 • Sawan Kumar, Sharmistha Jat, Karan Saxena, Partha Talukdar
To overcome this challenge, we propose Extended WSD Incorporating Sense Embeddings (EWISE), a supervised model to perform WSD by predicting over a continuous sense embedding space as opposed to a discrete label space.
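A minimal sketch of prediction over a continuous sense-embedding space, assuming precomputed candidate sense embeddings (e.g., derived from gloss encodings); this is illustrative of the idea, not EWISE's exact scoring:

```python
import torch

# Sketch of WSD in a continuous sense-embedding space. `context_vec`
# is the contextual embedding of the target word; `sense_embs` is an
# (n_senses, dim) tensor with one row per candidate sense.

def predict_sense(context_vec, sense_embs, sense_ids):
    # Score every candidate sense by dot product in the shared space,
    # rather than classifying over a fixed discrete label set.
    scores = sense_embs @ context_vec
    return sense_ids[int(torch.argmax(scores))]

# Because scoring happens in embedding space, senses unseen during
# training can still be ranked, enabling zero-shot generalization.
```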