1 code implementation • 2 Feb 2024 • Sohan Patnaik, Heril Changwal, Milan Aggarwal, Sumit Bhatia, Yaman Kumar, Balaji Krishnamurthy
Typically, only a small part of the whole table is relevant to derive the answer for a given question.
Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)
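The observation that only a small part of the table matters for a given question can be illustrated with a toy relevance filter. This is not the paper's method, just a minimal lexical-overlap baseline: score each row by token overlap with the question and keep the top-k rows.

```python
import re

# Illustrative baseline (NOT the paper's method): keep only the table rows
# most lexically relevant to the question before deriving an answer.
def tokens(s):
    return set(re.findall(r"\w+", s.lower()))

def relevant_rows(table, question, k=2):
    q = tokens(question)
    # stable sort by descending overlap with the question's tokens
    return sorted(table, key=lambda row: len(q & tokens(" ".join(row))),
                  reverse=True)[:k]

table = [
    ["Paris", "France", "2.1M"],
    ["Berlin", "Germany", "3.6M"],
    ["Madrid", "Spain", "3.3M"],
]
top = relevant_rows(table, "What is the population of Berlin?", k=1)
```

In practice a learned relevance model replaces the lexical score, but the pruning idea is the same.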
no code implementations • 14 Jul 2023 • Shivani Kumar, Sumit Bhatia, Milan Aggarwal, Tanmoy Chakraborty
To this end, we propose UNIT, a UNified dIalogue dataseT constructed from the conversations of existing datasets for different dialogue tasks, capturing the nuances of each.
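Construction of such a unified dataset can be sketched as follows. The record format and task names here are hypothetical; the point is that each conversation keeps its source-task label so task-specific nuances remain recoverable after merging.

```python
# Hypothetical sketch of unifying task-specific dialogue datasets:
# each dialogue is tagged with its originating task.
def unify(datasets):
    unified = []
    for task, dialogues in datasets.items():
        for dialogue in dialogues:
            unified.append({"task": task, "dialogue": dialogue})
    return unified

sources = {
    "emotion_recognition": [["Hi!", "I'm thrilled to hear that."]],
    "intent_detection": [["Book a table for two tonight."]],
}
unit = unify(sources)
```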
no code implementations • 11 May 2023 • H S V N S Kowndinya Renduchintala, KrishnaTeja Killamsetty, Sumit Bhatia, Milan Aggarwal, Ganesh Ramakrishnan, Rishabh Iyer, Balaji Krishnamurthy
A salient characteristic of pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and emergence of new capabilities with increasing model capacity and pre-training dataset size.
no code implementations • 12 Sep 2022 • Abhinav Java, Shripad Deshmukh, Milan Aggarwal, Surgan Jandial, Mausoom Sarkar, Balaji Krishnamurthy
MONOMER fuses context from the visual, textual, and spatial modalities of snippets and documents to find the query snippet in target documents.
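The fusion of the three modalities can be sketched in its simplest form: concatenate per-snippet feature vectors from each modality and project them into a joint space. The dimensions and the concatenate-then-project scheme are assumptions for illustration, not MONOMER's actual architecture.

```python
import numpy as np

# Illustrative sketch (dimensions and fusion scheme are assumptions):
# fuse visual, textual, and spatial snippet features by concatenation
# followed by a learned projection.
rng = np.random.default_rng(0)

visual  = rng.normal(size=(1, 128))  # e.g. an image-patch embedding
textual = rng.normal(size=(1, 256))  # e.g. an encoded text span
spatial = rng.normal(size=(1, 4))    # normalized bounding box (x, y, w, h)

fused_in = np.concatenate([visual, textual, spatial], axis=-1)  # (1, 388)
W = rng.normal(size=(fused_in.shape[-1], 64))  # projection weights
fused = np.tanh(fused_in @ W)                  # joint embedding, (1, 64)
```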
1 code implementation • 20 Aug 2022 • Yaman Kumar Singla, Rajat Jha, Arunim Gupta, Milan Aggarwal, Aditya Garg, Tushar Malyan, Ayush Bhardwaj, Rajiv Ratn Shah, Balaji Krishnamurthy, Changyou Chen
Motivated by persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies.
1 code implementation • Findings (NAACL) 2022 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters.
no code implementations • 13 Jun 2022 • Puneet Mangla, Shivam Chandhok, Milan Aggarwal, Vineeth N Balasubramanian, Balaji Krishnamurthy
To this end, we propose IntriNsic multimodality for DomaIn GeneralizatiOn (INDIGO), a simple and elegant way of leveraging the intrinsic modality present in these pre-trained multimodal networks along with the visual modality to enhance generalization to unseen domains at test-time.
no code implementations • NAACL 2022 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
To train CoSe-Co, we propose a novel dataset comprising sentence and commonsense knowledge pairs.
no code implementations • AKBC Workshop CSKB 2021 • Rachit Bansal, Milan Aggarwal, Sumit Bhatia, Jivat Neet Kaur, Balaji Krishnamurthy
Pre-trained Language Models (PTLMs) have been shown to perform well on natural language reasoning tasks requiring commonsense.
no code implementations • AKBC Workshop CSKB 2021 • Jivat Neet Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy
This allows the training of the language model to be decoupled from the external knowledge source, which can then be updated without affecting the parameters of the language model.
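The decoupling idea can be sketched as follows. The interfaces here are hypothetical: the frozen model only ever sees retrieved text, so editing the knowledge store never requires retraining.

```python
# Sketch of decoupling a frozen language model from an editable
# knowledge store (interfaces are hypothetical, for illustration).
class KnowledgeStore:
    def __init__(self):
        self.facts = {}

    def update(self, key, text):
        # editing knowledge touches only the store, never model weights
        self.facts[key] = text

    def retrieve(self, query):
        # toy retrieval: return facts whose key appears in the query
        return [t for k, t in self.facts.items() if k in query.lower()]

def answer(frozen_lm, store, question):
    context = " ".join(store.retrieve(question))
    return frozen_lm(question, context)  # model parameters unchanged

store = KnowledgeStore()
store.update("einstein", "Einstein was born in Ulm.")
ctx = store.retrieve("Where was Einstein born?")
```

A real system would use dense retrieval over an indexed corpus, but the separation of concerns is identical: knowledge lives outside the model.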
1 code implementation • EMNLP 2020 • Milan Aggarwal, Hiresh Gupta, Mausoom Sarkar, Balaji Krishnamurthy
To mitigate this, we propose Form2Seq, a novel sequence-to-sequence (Seq2Seq) inspired framework for structure extraction using text, with a specific focus on forms, which leverages relative spatial arrangement of structures.
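A key step in casting structure extraction as a Seq2Seq problem is linearizing form elements into a sequence by their relative spatial arrangement. The element format and tolerance parameter below are hypothetical; the sketch only shows the reading-order linearization, not Form2Seq itself.

```python
# Hypothetical illustration of the linearization step: order form
# elements top-to-bottom, left-to-right so a Seq2Seq model can consume
# them as a sequence. Element format and row_tol are assumptions.
def linearize(elements, row_tol=5):
    # bucket y-coordinates so elements on the same visual line group
    # together, then sort left-to-right within each line
    return sorted(elements, key=lambda e: (round(e["y"] / row_tol), e["x"]))

elements = [
    {"text": "Name:",  "x": 10, "y": 20},
    {"text": "Date:",  "x": 10, "y": 60},
    {"text": "______", "x": 80, "y": 21},
]
sequence = [e["text"] for e in linearize(elements)]
```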
1 code implementation • 9 Jul 2021 • Milan Aggarwal, Mausoom Sarkar, Hiresh Gupta, Balaji Krishnamurthy
Experimental results show the effectiveness of our approach, achieving a recall of 90.29%, 73.80%, 83.12%, and 52.72% for the above structures, respectively, significantly outperforming semantic segmentation baselines.
no code implementations • ACL 2021 • Madhur Panwar, Shashank Shailabh, Milan Aggarwal, Balaji Krishnamurthy
Topic models have been widely used to learn text representations and gain insight into document corpora.
1 code implementation • 6 Oct 2020 • Sumegh Roychowdhury, Sumedh A. Sontakke, Nikaash Puri, Mausoom Sarkar, Milan Aggarwal, Pinkesh Badjatiya, Balaji Krishnamurthy, Laurent Itti
Also, they are believed to be arranged hierarchically, allowing for an efficient representation of complex long-horizon experiences.
no code implementations • ECCV 2020 • Mausoom Sarkar, Milan Aggarwal, Arneh Jain, Hiresh Gupta, Balaji Krishnamurthy
We introduce our new human-annotated forms dataset and show that our method significantly outperforms different segmentation baselines on this dataset in extracting hierarchical structures.
no code implementations • 11 Nov 2018 • Milan Aggarwal, Nupur Kumari, Ayush Bansal, Balaji Krishnamurthy
Generating paraphrases, that is, different variations of a sentence conveying the same meaning, is an important yet challenging task in NLP.
no code implementations • ICLR 2018 • Milan Aggarwal, Aarushi Arora, Shagun Sodhani, Balaji Krishnamurthy
We develop a reinforcement learning based search assistant which can assist users through a set of actions and a sequence of interactions to enable them to realize their intent.