no code implementations • 13 Feb 2023 • Sudhanshu Srivastava, Ishika Gupta, Anusha Prakash, Jom Kuriakose, Hema A. Murthy
Hidden Markov model (HMM)-based text-to-speech (HTS) offers flexibility in speaking styles along with fast training and synthesis, while being computationally less intensive.
no code implementations • 22 Dec 2022 • Ishika Gupta, Anusha Prakash, Jom Kuriakose, Hema A. Murthy
This paper proposes an approach to build a high-quality text-to-speech (TTS) system for technical domains using data augmentation.
no code implementations • 4 Mar 2021 • Nauman Dawalatabad, Jilt Sebastian, Jom Kuriakose, C. Chandra Sekhar, Shrikanth Narayanan, Hema A. Murthy
In this work, we address the problem of separating the percussive voices in the taniavartanam segments of Carnatic music.
no code implementations • 14 Nov 2020 • Vinay Kumar Verma, Ashish Mishra, Anubha Pandey, Hema A. Murthy, Piyush Rai
We present a meta-learning-based generative model for zero-shot learning (ZSL), targeting the challenging setting in which the number of training examples from each *seen* class is very small.
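The core idea named here (a generative model for ZSL) can be illustrated with a toy sketch: synthesize pseudo-features for unseen classes from their attribute vectors, then classify test samples against those synthesized features. The generator, class names, and attribute vectors below are all hypothetical stand-ins, not the paper's actual meta-learned model.

```python
import random

def synthesize(attr, rng, n=5, sigma=0.1):
    """Toy conditional generator: draw n pseudo-feature vectors around the
    class attribute vector (stand-in for a learned generative model)."""
    return [[a + rng.gauss(0, sigma) for a in attr] for _ in range(n)]

def nearest_class(x, class_feats):
    """Classify x as the class whose synthesized features lie closest."""
    def dist2(u, v):
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return min(class_feats,
               key=lambda c: min(dist2(x, f) for f in class_feats[c]))

rng = random.Random(0)
# Hypothetical unseen classes described only by attribute vectors.
attrs = {"zebra": [1.0, 0.0], "whale": [0.0, 1.0]}
feats = {c: synthesize(a, rng) for c, a in attrs.items()}
print(nearest_class([0.9, 0.1], feats))  # zebra
```

In the real setting, the generator would be trained on seen classes (with meta-learning to cope with few examples per class) and only then used to hallucinate features for unseen classes.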
no code implementations • 27 Jul 2020 • Mari Ganesh Kumar, Shrikanth Narayanan, Mriganka Sur, Hema A. Murthy
These high-dimensional statistics are then projected to a lower-dimensional space in which the biometric information is preserved.
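The projection step described above can be sketched generically as a fixed linear map from a high-dimensional statistics vector to a low-dimensional embedding. The random map below is only an illustrative stand-in; the paper's actual projection (and how it is learned to preserve biometric information) is not specified in this snippet.

```python
import random

def linear_project(x, w):
    """Project a high-dimensional vector x to len(w) dimensions
    using a fixed linear map w, given as a list of row vectors."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

random.seed(0)
dim_high, dim_low = 16, 3
# Hypothetical random linear map, standing in for a learned projection.
w = [[random.gauss(0, 1) for _ in range(dim_high)] for _ in range(dim_low)]
stats = [random.gauss(0, 1) for _ in range(dim_high)]  # dummy statistics
embedding = linear_project(stats, w)
print(len(embedding))  # 3
```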
no code implementations • 18 Jan 2020 • Anubha Pandey, Ashish Mishra, Vinay Kumar Verma, Anurag Mittal, Hema A. Murthy
Conventional approaches to Sketch-Based Image Retrieval (SBIR) assume that the data of all the classes are available during training.
1 code implementation • 16 Apr 2019 • Mari Ganesh Kumar, Suvidha Rupesh Kumar, Saranya M, B. Bharathi, Hema A. Murthy
When combined with the decision-level feature switching (DLFS) paradigm, the best TD-SNN system outperforms the best baseline GMM system on evaluation data, with relative improvements of 48.03% and 49.47% for logical and physical access, respectively.
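Decision-level feature switching, as the name suggests, fuses systems at the decision stage rather than the feature stage. A minimal sketch, assuming per-trial confidence scores are available for each feature stream (the exact switching criterion used in the paper is not given in this snippet):

```python
def dlfs_score(scores_a, conf_a, scores_b, conf_b):
    """Decision-level feature switching (sketch): for each trial, keep
    the score from whichever feature stream reports higher confidence."""
    return [sa if ca >= cb else sb
            for sa, ca, sb, cb in zip(scores_a, conf_a, scores_b, conf_b)]

# Two hypothetical feature streams scoring the same two trials.
fused = dlfs_score([0.9, 0.2], [0.8, 0.3],   # stream A: scores, confidences
                   [0.5, 0.7], [0.6, 0.9])   # stream B: scores, confidences
print(fused)  # [0.9, 0.7]
```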
no code implementations • 3 Sep 2017 • Ashish Mishra, M Shiva Krishna Reddy, Anurag Mittal, Hema A. Murthy
Through extensive experiments on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where training classes can also appear at test time alongside the novel classes.