Search Results for author: Santosh Mashetty

Found 3 papers, 3 papers with code

Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models

1 code implementation · 23 Apr 2024 · Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, Chitta Baral

Existing work investigating the logical reasoning ability of LLMs has focused on only a couple of inference rules of propositional and first-order logic, such as modus ponens and modus tollens.

Tasks: Logical Reasoning · Question Answering

Instruction Tuned Models are Quick Learners

1 code implementation · 17 May 2023 · Himanshu Gupta, Saurabh Arjun Sawant, Swaroop Mishra, Mutsumi Nakamura, Arindam Mitra, Santosh Mashetty, Chitta Baral

In the MTL setting, an instruction-tuned model trained on only 6% of the downstream training data achieves SOTA, while using 100% of the training data yields a 3.69-point improvement (ROUGE-L 74.68) over the previous SOTA.

Tasks: In-Context Learning · Multi-Task Learning · +1

Context-NER: Contextual Phrase Generation at Scale

1 code implementation · 16 Sep 2021 · Himanshu Gupta, Shreyas Verma, Santosh Mashetty, Swaroop Mishra

In this paper, we introduce CONTEXT-NER, a task that aims to generate the relevant context for entities in a sentence, where the context is a phrase describing the entity but not necessarily present in the sentence.

Tasks: ContextNER · Language Modelling · +8
