1 code implementation • 23 Apr 2024 • Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, Chitta Baral
Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic.
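The two inference rules named above can be illustrated concretely. A minimal sketch (not from the paper) that verifies both rules hold under every truth assignment of P and Q:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

def check_rules() -> bool:
    # Exhaustively check both rules over all four truth assignments.
    for p, q in product([True, False], repeat=2):
        # Modus ponens: from P and (P -> Q), infer Q.
        if p and implies(p, q):
            assert q
        # Modus tollens: from (P -> Q) and not Q, infer not P.
        if implies(p, q) and not q:
            assert not p
    return True

check_rules()
```

Benchmarks that test only these two rules leave most of propositional and first-order logic (e.g. disjunctive syllogism, constructive dilemma) unexercised, which is the gap this line of work addresses.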
1 code implementation • 17 May 2023 • Himanshu Gupta, Saurabh Arjun Sawant, Swaroop Mishra, Mutsumi Nakamura, Arindam Mitra, Santosh Mashetty, Chitta Baral
In the MTL setting, an instruction-tuned model trained on only 6% of the downstream training data achieves SOTA, while using 100% of the training data yields a 3.69-point improvement (ROUGE-L 74.68) over the previous SOTA.
1 code implementation • 16 Sep 2021 • Himanshu Gupta, Shreyas Verma, Santosh Mashetty, Swaroop Mishra
In this paper, we introduce CONTEXT-NER, a task that aims to generate the relevant context for entities in a sentence, where the context is a phrase describing the entity but not necessarily present in the sentence.
Ranked #1 on ContextNER on EDGAR10-Q Dataset
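The task definition above can be sketched as an input/output pair. The schema and example values below are illustrative assumptions, not the paper's exact data format:

```python
# Hypothetical CONTEXT-NER example (schema and values are assumptions,
# not the paper's exact format). The input is a sentence plus an entity
# occurring in it; the target "context" is a phrase describing that
# entity, which need not appear verbatim in the sentence.
example = {
    "sentence": "The company reported revenue of $4.2 billion in Q3.",
    "entity": "$4.2 billion",
    "context": "total quarterly revenue",  # descriptive phrase, not in sentence
}

# The entity is grounded in the sentence; the context generally is not.
entity_in_sentence = example["entity"] in example["sentence"]
context_in_sentence = example["context"] in example["sentence"]
```

The design choice worth noting is that, unlike standard NER, the target is generated free text rather than a span or label, which is why generation metrics such as ROUGE-L are used for evaluation.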