no code implementations • NAACL 2022 • Rajat Kumar, Mayur Patidar, Vaibhav Varshney, Lovekesh Vig, Gautam Shroff
However, even skilled domain experts are often unable to foresee all possible user intents at design time and for practical applications, novel intents may have to be inferred incrementally on-the-fly from user utterances.
Ranked #1 on Open Intent Discovery on BANKING77
no code implementations • ICON 2021 • Kunal Pagarey, Kanika Kalra, Abhay Garg, Saumajit Saha, Mayur Patidar, Shirish Karande
We explore the ability of pre-trained language models, namely BART (an encoder-decoder model) and GPT2 and GPT-Neo (both decoder-only models), to generate sentences from structured MR tags as input.
no code implementations • EACL (AdaptNLP) 2021 • Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig
In comparison, in this work, we observe that a simpler filtering approach based on a domain classifier, applied only to the pseudo-training data, can consistently perform better, providing BLEU score gains of 1.40, 1.82 and 0.76 for Medical, Law and IT in one direction, and 1.28, 1.60 and 1.60 in the other direction, in the low-resource scenario over competitive baselines.
no code implementations • Findings (NAACL) 2022 • Vaibhav Varshney, Mayur Patidar, Rajat Kumar, Lovekesh Vig, Gautam Shroff
This typically entails repeated retraining of the intent detector on both the existing and novel intents which can be expensive and would require storage of all past data corresponding to prior intents.
no code implementations • AACL (WAT) 2020 • Nikhil Jaiswal, Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig
This is further followed by fine-tuning on the domain-specific corpus.
no code implementations • 15 Nov 2023 • Mayur Patidar, Riya Sawhney, Avinash Singh, Biswajit Chatterjee, Mausam, Indrajit Bhattacharya
Additional experiments in the in-domain setting show that FuSIC-KBQA also outperforms SoTA KBQA models when training data is limited.
no code implementations • 20 Dec 2022 • Mayur Patidar, Prayushi Faldu, Avinash Singh, Lovekesh Vig, Indrajit Bhattacharya, Mausam
When answering natural language questions over knowledge bases, missing facts, incomplete schema and limited scope naturally lead to many questions being unanswerable.
no code implementations • EACL 2021 • Saurabh Srivastava, Mayur Patidar, Sudip Chowdhury, Puneet Agarwal, Indrajit Bhattacharya, Gautam Shroff
Question answering (QA) over a knowledge graph (KG) is the task of answering a natural language (NL) query using the information stored in the KG.
no code implementations • WS 2019 • Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, Gautam Shroff
We observe that the proposed approach provides consistent gains in the performance of BERT for multiple benchmark datasets (e.g. a 1.0% gain on MLDocs, and a 1.2% gain on XNLI over translate-train with BERT), while requiring a single model for multiple languages.