no code implementations • 12 Oct 2023 • Aakriti Agrawal, Rohith Aralikatti, Yanchao Sun, Furong Huang
This work is the first to formulate the generalised problem of robustness to multi-modal environment uncertainty in multi-agent reinforcement learning (MARL).
no code implementations • 10 Dec 2022 • Rohith Aralikatti, Zhenyu Tang, Dinesh Manocha
We present a novel approach to improve the performance of learning-based speech dereverberation using accurate synthetic datasets.
no code implementations • 15 Nov 2022 • Rohith Aralikatti, Christoph Boeddeker, Gordon Wichern, Aswin Shanmugam Subramanian, Jonathan Le Roux
This paper proposes reverberation as supervision (RAS), a novel unsupervised loss function for single-channel reverberant speech separation.
no code implementations • 19 Jul 2021 • Rohith Aralikatti, Anton Ratnarajah, Zhenyu Tang, Dinesh Manocha
We present a novel approach that improves the performance of reverberant speech separation.
no code implementations • 29 Jan 2020 • Rohith Aralikatti, Sharad Roy, Abhinav Thanda, Dilip Kumar Margam, Pujitha Appan Kandala, Tanay Sharma, Shankar M Venkatesan
In this work, we propose novel methods to fuse information from audio and visual modalities at inference time.
no code implementations • 25 Jun 2019 • Dilip Kumar Margam, Rohith Aralikatti, Tanay Sharma, Abhinav Thanda, Pujitha A K, Sharad Roy, Shankar M Venkatesan
We also verify the method on a second dataset of 81 speakers that we collected.
no code implementations • 12 Apr 2018 • Rohith Aralikatti, Dilip Margam, Tanay Sharma, Thanda Abhinav, Shankar M Venkatesan
This paper demonstrates two novel methods to estimate the global SNR of speech signals.
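The paper itself is not summarised in detail here; as background, global SNR is conventionally the ratio of total speech energy to total noise energy over a whole utterance, expressed in decibels. A minimal sketch of that conventional definition (the function name `global_snr_db` and the assumption that noise is the residual `noisy - clean` are illustrative, not taken from the paper):

```python
import numpy as np

def global_snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """Global SNR in dB, assuming the noise is the residual noisy - clean.

    This is the standard textbook definition, computed over the entire
    signal (hence "global"), not the paper's estimation method, which
    predicts SNR without access to the clean reference.
    """
    noise = noisy - clean
    signal_energy = np.sum(clean.astype(np.float64) ** 2)
    noise_energy = np.sum(noise.astype(np.float64) ** 2)
    return 10.0 * np.log10(signal_energy / noise_energy)
```

For example, adding noise with one-hundredth of the signal's energy yields a global SNR of 20 dB. The estimation problem the paper addresses is harder: predicting this quantity when only the noisy signal is available.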