Search Results for author: Mayur Patidar

Found 9 papers, 0 papers with code

Intent Detection and Discovery from User Logs via Deep Semi-Supervised Contrastive Clustering

no code implementations • NAACL 2022 • Rajat Kumar, Mayur Patidar, Vaibhav Varshney, Lovekesh Vig, Gautam Shroff

However, even skilled domain experts are often unable to foresee all possible user intents at design time, and in practical applications novel intents may have to be inferred incrementally, on the fly, from user utterances.
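As a rough illustration of that discovery step, the sketch below groups unlabeled utterances into candidate novel intents by clustering their embeddings. Plain KMeans stands in for the paper's deep semi-supervised contrastive clustering, and `embed()` is a hypothetical sentence encoder, so this shows the general idea rather than the paper's method.

```python
# Sketch only: KMeans stands in for deep semi-supervised contrastive
# clustering, and embed() is a hypothetical sentence encoder.
import numpy as np
from sklearn.cluster import KMeans

def discover_candidate_intents(utterances, embed, n_clusters=10):
    X = np.stack([embed(u) for u in utterances])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    # Treat each cluster as a candidate novel intent.
    clusters = {}
    for utt, label in zip(utterances, km.labels_):
        clusters.setdefault(int(label), []).append(utt)
    return clusters
```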

Clustering, Intent Detection, +4

Stylistic MR-to-Text Generation Using Pre-trained Language Models

no code implementations • ICON 2021 • Kunal Pagarey, Kanika Kalra, Abhay Garg, Saumajit Saha, Mayur Patidar, Shirish Karande

We explore the ability of the pre-trained language models BART (an encoder-decoder model), GPT-2, and GPT-Neo (both decoder-only models) to generate sentences from structured MR tags as input.
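A minimal sketch of this setup, assuming a HuggingFace-style pipeline: a meaning representation is linearized into a flat string and fed to BART for generation. The checkpoint and the MR tag format below are illustrative assumptions, and the model would need fine-tuning on MR-text pairs before producing sensible output.

```python
# Sketch only: the checkpoint and linearized MR format are assumptions;
# an untuned model will not generate sensible text for this input.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# A structured MR, linearized into a flat input string.
mr = "name[The Eagle] eatType[coffee shop] food[French] priceRange[moderate]"

inputs = tokenizer(mr, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```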

POS, Sentence, +1

Domain Adaptation for NMT via Filtered Iterative Back-Translation

no code implementations • EACL (AdaptNLP) 2021 • Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig

In comparison, in this work we observe that a simpler filtering approach based on a domain classifier, applied only to the pseudo-training data, can consistently perform better, providing BLEU-score gains of 1.40, 1.82, and 0.76 for Medical, Law, and IT in one direction, and 1.28, 1.60, and 1.60 in the other direction, in a low-resource scenario over competitive baselines.
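A minimal sketch of such a filter, assuming the domain classifier returns an in-domain probability for a sentence. The classifier interface, the 0.5 threshold, and the choice to score the back-translated (synthetic) source side are all illustrative assumptions, not the paper's exact setup.

```python
# Sketch only: filter back-translated pseudo-parallel pairs with a
# domain classifier; interface, threshold, and scored side are assumed.
def filter_pseudo_data(pseudo_pairs, domain_score, threshold=0.5):
    """Keep (source, target) pseudo pairs whose back-translated source
    side the classifier scores as in-domain above the threshold."""
    return [
        (src, tgt)
        for src, tgt in pseudo_pairs
        if domain_score(src) >= threshold
    ]
```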

Domain Adaptation, Machine Translation, +2

Prompt Augmented Generative Replay via Supervised Contrastive Learning for Lifelong Intent Detection

no code implementations • Findings (NAACL) 2022 • Vaibhav Varshney, Mayur Patidar, Rajat Kumar, Lovekesh Vig, Gautam Shroff

This typically entails repeated retraining of the intent detector on both the existing and novel intents, which can be expensive and would require storage of all past data corresponding to prior intents.
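To illustrate the supervised contrastive component named in the title, here is a minimal PyTorch sketch of a supervised contrastive loss (in the style of Khosla et al.): embeddings that share an intent label are pulled together, and all others are pushed apart. This is the generic formulation, not the paper's exact training objective.

```python
# Sketch only: generic supervised contrastive loss over a batch of
# intent embeddings, not the paper's exact objective.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    z = F.normalize(embeddings, dim=1)                # unit-normalize
    sim = z @ z.T / temperature                       # scaled cosine similarity
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0  # anchors with at least one same-label positive
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```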

Continual Learning, Contrastive Learning, +2

Do I have the Knowledge to Answer? Investigating Answerability of Knowledge Base Questions

no code implementations • 20 Dec 2022 • Mayur Patidar, Prayushi Faldu, Avinash Singh, Lovekesh Vig, Indrajit Bhattacharya, Mausam

When answering natural language questions over knowledge bases, missing facts, incomplete schema and limited scope naturally lead to many questions being unanswerable.

From Monolingual to Multilingual FAQ Assistant using Multilingual Co-training

no code implementations • WS 2019 • Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, Gautam Shroff

We observe that the proposed approach provides consistent gains in the performance of BERT on multiple benchmark datasets (e.g., a 1.0% gain on MLDocs and a 1.2% gain on XNLI over translate-train with BERT), while requiring a single model for multiple languages.

Cross-Lingual Transfer
