Search Results for author: Jithendra Vepa

Found 7 papers, 2 papers with code

What BERT Based Language Model Learns in Spoken Transcripts: An Empirical Study

no code implementations · EMNLP (BlackboxNLP) 2021 · Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa

We probe BERT-based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand multifarious properties in the absence of any speech cues.

Language Modelling · Spoken Language Understanding
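Probing in this sense usually means freezing the language model and training a lightweight classifier on its representations to test whether a property is linearly decodable. A minimal sketch of that recipe using the Hugging Face transformers API; the disfluency labels and utterances are illustrative placeholders, not the paper's data:

```python
# Minimal probing sketch: freeze BERT, train a linear probe on its
# sentence representations. Labels and utterances are hypothetical.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

texts = ["i uh i want to um check my balance", "please close my account"]
labels = [1, 0]  # hypothetical property: 1 = utterance contains a disfluency

with torch.no_grad():
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    feats = model(**batch).last_hidden_state[:, 0, :].numpy()  # [CLS] vectors

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
print(probe.score(feats, labels))  # high accuracy = property is linearly decodable
```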

Towards Probing Contact Center Large Language Models

no code implementations · 26 Dec 2023 · Varun Nathan, Ayush Kumar, Digvijay Ingle, Jithendra Vepa

Additionally, we compare the performance of out-of-the-box LLMs (OOB-LLMs) and contact-center LLMs (CC-LLMs) on the widely used SentEval dataset, and assess their capabilities in terms of surface, syntactic, and semantic information through probing tasks.

Automatic Speech Recognition (ASR) +1
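A SentEval-style surface probe follows the same pattern: a linear classifier predicts a surface property (here, a coarse sentence-length class) from frozen encoder features. In the sketch below, random vectors merely stand in for the OOB-LLM or CC-LLM embeddings; with real features, higher probe accuracy would indicate more surface information retained:

```python
# SentEval-style surface probe (sentence-length prediction).
# Random vectors stand in for frozen LLM embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 768))          # placeholder frozen-LLM features
lengths = rng.integers(3, 30, size=200)           # placeholder sentence lengths
buckets = np.digitize(lengths, bins=[5, 10, 20])  # coarse length classes

probe = LogisticRegression(max_iter=1000).fit(embeddings, buckets)
print("length-probe accuracy:", probe.score(embeddings, buckets))
```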

Low Resource Pipeline for Spoken Language Understanding via Weak Supervision

no code implementations · 21 Jun 2022 · Ayush Kumar, Rishabh Kumar Tripathi, Jithendra Vepa

In weakly supervised learning (WSL), a model is trained on noisy labels obtained from semantic rules and task-specific pre-trained models.

Emotion Classification · Few-Shot Learning +3
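The WSL recipe the snippet describes can be sketched in a few lines: semantic rules assign noisy labels to unlabeled utterances, and a model is trained on those labels with no human annotation. The keyword rule and binary classes below are illustrative assumptions, not the paper's setup:

```python
# Weak-supervision sketch: a semantic rule produces noisy labels,
# and a classifier is trained purely on the rule's output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rule_label(utterance: str) -> int:
    """Toy semantic rule: keyword match -> noisy emotion label."""
    angry_cues = {"angry", "furious", "unacceptable", "terrible"}
    return 1 if any(w in utterance.lower() for w in angry_cues) else 0

unlabeled = [
    "this is unacceptable, i want a refund",
    "thanks, that solved my problem",
    "the service has been terrible lately",
    "can you check my order status",
]
noisy_labels = [rule_label(u) for u in unlabeled]  # no human annotation used

X = TfidfVectorizer().fit_transform(unlabeled)
model = LogisticRegression().fit(X, noisy_labels)  # trained on noisy labels only
```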

Exploring the Limits of Natural Language Inference Based Setup for Few-Shot Intent Detection

1 code implementation · 14 Dec 2021 · Ayush Kumar, Vijit Malik, Jithendra Vepa

Our method achieves state-of-the-art results on 1-shot and 5-shot intent detection tasks, with gains of 2-8 percentage points in F1 score on four benchmark datasets.

Few-Shot Learning · Generalized Few-Shot Learning +5
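The NLI-based setup casts each candidate intent as an entailment hypothesis and picks the intent whose hypothesis the model scores highest against the utterance. The transformers zero-shot pipeline implements exactly this pattern; the intent names and hypothesis template below are illustrative, and the paper's exact configuration may differ:

```python
# NLI as intent detection: each intent becomes an entailment
# hypothesis; the highest-scoring hypothesis wins.
from transformers import pipeline

nli = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "i'd like to move money to my savings account"
intents = ["transfer money", "check balance", "report lost card"]

result = nli(utterance, candidate_labels=intents,
             hypothesis_template="The user wants to {}.")
print(result["labels"][0])  # -> "transfer money"
```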

What BERT Based Language Models Learn in Spoken Transcripts: An Empirical Study

no code implementations · 19 Sep 2021 · Ayush Kumar, Mukuntha Narayanan Sundararaman, Jithendra Vepa

We probe BERT-based language models (BERT, RoBERTa) trained on spoken transcripts to investigate their ability to understand multifarious properties in the absence of any speech cues.

Spoken Language Understanding

Phoneme-BERT: Joint Language Modelling of Phoneme Sequence and ASR Transcript

1 code implementation · 1 Feb 2021 · Mukuntha Narayanan Sundararaman, Ayush Kumar, Jithendra Vepa

In this work, we propose a BERT-style language model, referred to as PhonemeBERT, which jointly models the phoneme sequence and the ASR transcript to learn phonetic-aware representations that are robust to ASR errors.

Intent Classification +1
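One way to picture the joint input: the ASR transcript and its phoneme sequence are packed into a single sequence so a BERT-style masked language model can attend across both, letting phonemes disambiguate ASR errors. A simplified sketch; PhonemeBERT uses a dedicated phoneme vocabulary, whereas this reuses the word-piece tokenizer's sentence-pair mechanism for brevity:

```python
# PhonemeBERT-style input sketch: transcript and phoneme sequence
# paired into one sequence for a joint masked language model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

asr_transcript = "check my bill ants"    # ASR error: "balance" -> "bill ants"
phonemes = "CH EH K M AY B AE L AH N S"  # phonemes still encode "balance"

encoded = tokenizer(asr_transcript, phonemes, return_tensors="pt")
# [CLS] transcript tokens [SEP] phoneme tokens [SEP] -> joint MLM input
print(tokenizer.decode(encoded["input_ids"][0]))
```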

Gated Mechanism for Attention Based Multimodal Sentiment Analysis

no code implementations · 21 Feb 2020 · Ayush Kumar, Jithendra Vepa

Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls, and video blogs.

Multimodal Sentiment Analysis
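The abstract snippet stops at the motivation, but the title points at the mechanism: a learned gate that controls how much each modality contributes to the fused representation. A generic gated-fusion sketch under that assumption; the dimensions and the text/audio pairing are illustrative, not the paper's architecture:

```python
# Gated fusion sketch: a sigmoid gate mixes two modality representations.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_text: torch.Tensor, h_audio: torch.Tensor) -> torch.Tensor:
        # Gate is computed from both modalities, then blends them.
        g = torch.sigmoid(self.gate(torch.cat([h_text, h_audio], dim=-1)))
        return g * h_text + (1 - g) * h_audio

fusion = GatedFusion(dim=128)
fused = fusion(torch.randn(4, 128), torch.randn(4, 128))  # batch of 4 utterances
print(fused.shape)  # torch.Size([4, 128])
```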
