no code implementations • EMNLP (BlackboxNLP) 2021 • Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa
We probe BERT-based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand multifarious properties in the absence of any speech cues.
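A minimal sketch of the kind of probing setup this entry describes, assuming the HuggingFace transformers and scikit-learn libraries; the transcripts and property labels below are placeholders, not the paper's data:

```python
# Minimal probing sketch (illustrative only): train a linear probe on frozen
# BERT sentence representations to predict a property of spoken transcripts.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

transcripts = ["um so i was thinking we could", "yeah that sounds good to me"]
labels = [0, 1]  # hypothetical property labels (e.g., disfluent vs. fluent)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

with torch.no_grad():
    enc = tokenizer(transcripts, padding=True, truncation=True, return_tensors="pt")
    # Use the [CLS] vector of the last layer as a frozen sentence representation.
    features = model(**enc).last_hidden_state[:, 0, :].numpy()

# A linear probe: if it recovers the property from frozen representations,
# that information is linearly accessible in the model's embeddings.
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))
```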
no code implementations • 26 Dec 2023 • Varun Nathan, Ayush Kumar, Digvijay Ingle, Jithendra Vepa
Additionally, we compare the performance of OOB-LLMs and CC-LLMs on the widely used SentEval benchmark, and assess their capabilities in terms of surface, syntactic, and semantic information through probing tasks (see the probing sketch below).
Automatic Speech Recognition (ASR) +1
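A rough sketch of how SentEval-style probing is typically wired up, assuming the facebookresearch/SentEval toolkit and a BERT encoder; the data path, task selection, and parameter values are assumptions, not the paper's configuration:

```python
# Rough SentEval probing sketch (illustrative; consult
# https://github.com/facebookresearch/SentEval for exact parameters and data setup).
import torch
import senteval
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def batcher(params, batch):
    # SentEval passes batches of tokenized sentences; re-join and encode them.
    sents = [" ".join(tokens) if tokens else "." for tokens in batch]
    enc = tokenizer(sents, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state[:, 0, :]  # [CLS] embeddings
    return out.numpy()

params = {"task_path": "SentEval/data", "usepytorch": True, "kfold": 5}  # assumed path
se = senteval.engine.SE(params, batcher)
# Surface (Length), syntactic (Depth), and semantic (Tense) probing tasks.
results = se.eval(["Length", "Depth", "Tense"])
print(results)
```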
no code implementations • 21 Jun 2022 • Ayush Kumar, Rishabh Kumar Tripathi, Jithendra Vepa
In Weakly Supervised Learning (WSL), a model is trained on noisy labels obtained from semantic rules and task-specific pre-trained models.
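A minimal weak-supervision sketch under stated assumptions: weak labels come from a hand-written semantic rule and an off-the-shelf pre-trained sentiment model, are combined by majority vote, and a simple classifier is then trained on the resulting noisy labels. The rule, model, and data are illustrative, not the paper's setup:

```python
# Weak supervision sketch: rule-based and model-based labelers vote on each
# example, and a downstream classifier is trained on the noisy majority label.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import pipeline

texts = ["i want to cancel my order", "thanks, that was really helpful"]

def rule_negative_keywords(text):
    # Semantic rule: flag complaint-like keywords; abstain otherwise.
    return 1 if any(w in text for w in ("cancel", "refund", "complaint")) else None

sentiment = pipeline("sentiment-analysis")  # task-specific pre-trained model

def model_vote(text):
    return 1 if sentiment(text)[0]["label"] == "NEGATIVE" else 0

def weak_label(text):
    votes = [v for v in (rule_negative_keywords(text), model_vote(text)) if v is not None]
    return Counter(votes).most_common(1)[0][0]

noisy_labels = [weak_label(t) for t in texts]
features = TfidfVectorizer().fit_transform(texts)
clf = LogisticRegression().fit(features, noisy_labels)
```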
1 code implementation • 14 Dec 2021 • Ayush Kumar, Vijit Malik, Jithendra Vepa
Our method achieves state-of-the-art results on 1-shot and 5-shot intent detection tasks, with gains ranging from 2 to 8 percentage points in F1 score on four benchmark datasets.
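The entry above reports results only, so the sketch below shows a generic nearest-prototype baseline for few-shot intent detection, not the paper's method; the sentence-transformers encoder, intents, and examples are assumptions:

```python
# Generic few-shot intent-detection baseline: build one prototype embedding per
# intent from the few labeled examples and classify a query by nearest prototype.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

support = {  # 1-shot support set with hypothetical intents
    "book_flight": ["i need a flight to boston tomorrow"],
    "check_balance": ["how much money is in my savings account"],
}

prototypes = {intent: encoder.encode(examples).mean(axis=0)
              for intent, examples in support.items()}

def predict(query):
    q = encoder.encode([query])[0]
    scores = {i: float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))
              for i, p in prototypes.items()}
    return max(scores, key=scores.get)  # highest cosine similarity wins

print(predict("what's my current account balance"))
```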
1 code implementation • 1 Feb 2021 • Mukuntha Narayanan Sundararaman, Ayush Kumar, Jithendra Vepa
In this work, we propose a BERT-style language model, referred to as PhonemeBERT, that jointly models the phoneme sequence and the ASR transcript to learn phonetic-aware representations that are robust to ASR errors.
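An illustrative sketch of the joint phoneme-plus-transcript input idea, not the paper's exact architecture or pre-training recipe; the base model, phoneme symbols, and masking step are assumptions:

```python
# Joint-input sketch: pack the noisy ASR transcript and its phoneme sequence
# into one sentence pair for a BERT-style masked LM. Phoneme symbols are added
# as new tokens since they are not in the standard BERT vocabulary.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

asr_transcript = "please wait"
phonemes = "p l iy z w ey t"  # lower-cased ARPAbet-style symbols (assumed)

# Register phoneme symbols as tokens and resize the embedding matrix.
tokenizer.add_tokens(sorted(set(phonemes.split())))
model.resize_token_embeddings(len(tokenizer))

# Encode as a sentence pair: [CLS] transcript [SEP] phonemes [SEP]
inputs = tokenizer(asr_transcript, phonemes, return_tensors="pt")
labels = inputs["input_ids"].clone()
inputs["input_ids"][0, 2] = tokenizer.mask_token_id  # mask one transcript token
outputs = model(**inputs, labels=labels)  # masked-LM style loss
print(outputs.loss)
```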
no code implementations • 21 Feb 2020 • Ayush Kumar, Jithendra Vepa
Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs.
Ranked #4 on Multimodal Sentiment Analysis on MOSI