no code implementations • NIDCP (LREC) 2022 • John H.L. Hansen, Aditya Joglekar, Szu-Jui Chen, Meena Chandra Shekar, Chelzy Belitz
We aim to make this entire resource and supporting speech technology meta-data creation publicly available as a Community Resource for the development of speech and behavioral science.
no code implementations • 17 Jan 2024 • Iván López-Espejo, Aditya Joglekar, Antonio M. Peinado, Jesper Jensen
Pre-emphasis filtering, which compensates for the natural energy decay of speech at higher frequencies, has been a common pre-processing step in a number of speech processing tasks over the years.
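For context, the following is a minimal sketch of the standard first-order pre-emphasis filter, y[n] = x[n] - α·x[n-1], commonly applied before feature extraction; it is not taken from the paper, and the coefficient α = 0.97 and the synthetic test signal are only conventional placeholders.

```python
import numpy as np

def pre_emphasis(signal: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    """First-order high-pass pre-emphasis: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

# Example: emphasize high-frequency content of a 25 ms frame at 16 kHz
# before computing features such as filterbanks or MFCCs.
sr = 16000
t = np.arange(0, 0.025, 1 / sr)
frame = np.sin(2 * np.pi * 200 * t)      # synthetic low-frequency content
emphasized = pre_emphasis(frame)
```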
1 code implementation • 17 May 2023 • Hongrui Chen, Aditya Joglekar, Levent Burak Kara
We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization.
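A rough illustration of the idea of a conditioning field input is sketched below: a coordinate-based network receives, alongside each spatial coordinate, the precomputed strain-energy value at that point. The layer sizes, 2D setting, and sigmoid density output are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionedDensityNet(nn.Module):
    """Sketch: an MLP mapping a spatial coordinate plus a precomputed
    strain-energy value at that coordinate to a material density in [0, 1]."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),    # inputs: (x, y, strain_energy)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid()  # density in [0, 1]
        )

    def forward(self, coords: torch.Tensor, strain_energy: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([coords, strain_energy], dim=-1))

# coords: (N, 2) sample points; strain_energy: (N, 1) field from the initial design domain
densities = ConditionedDensityNet()(torch.rand(128, 2), torch.rand(128, 1))
```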
1 code implementation • 6 May 2023 • Aditya Joglekar, Hongrui Chen, Levent Burak Kara
We show that, with a suitable Fourier Features neural network architecture and hyperparameters, the density field approximation network can learn weights that represent the optimal density field for the given domain and boundary conditions by directly backpropagating the loss gradient through the displacement field approximation network. Unlike prior work, this requires no sensitivity filter, no optimality criterion method, and no separate training of the density network in each topology optimization iteration.
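To make the Fourier Features ingredient concrete, here is a generic sketch of a random Fourier feature encoding of spatial coordinates feeding an MLP that outputs a material density; the projection scale, feature count, and layer widths are assumptions, and the compliance-style loss through a displacement-field network is only described in the comment, not implemented.

```python
import torch
import torch.nn as nn

class FourierFeatureDensityNet(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): random Fourier
    feature encoding of coordinates followed by an MLP producing a density in [0, 1]."""
    def __init__(self, in_dim: int = 2, n_features: int = 128, scale: float = 10.0):
        super().__init__()
        # Fixed random projection matrix B for the Fourier feature mapping
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)

# Densities at sampled domain points; in the described setup, gradients of a
# compliance loss computed via a displacement-field network would flow back
# into these weights during optimization.
rho = FourierFeatureDensityNet()(torch.rand(256, 2))
```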
no code implementations • 3 Nov 2022 • Aditya Joglekar, John H. L. Hansen
The Fearless Steps Challenge 2019 Phase-1 (FSC-P1) is the inaugural Challenge of the Fearless Steps Initiative hosted by the Center for Robust Speech Systems (CRSS) at the University of Texas at Dallas.
no code implementations • 4 Oct 2022 • Hongrui Chen, Aditya Joglekar, Kate S. Whitefoot, Levent Burak Kara
Through training, the network learns a material density and segment classification in the continuous 3D space.
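One plausible reading of a network that jointly predicts density and a segment label over continuous 3D space is sketched below; the two-head layout, hidden sizes, and number of segments are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class DensitySegmentNet(nn.Module):
    """Sketch: a coordinate network over continuous 3D space with two heads,
    one predicting material density and one predicting a segment label."""
    def __init__(self, n_segments: int = 4, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.density_head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        self.segment_head = nn.Linear(hidden, n_segments)   # per-segment logits

    def forward(self, xyz: torch.Tensor):
        h = self.backbone(xyz)
        return self.density_head(h), self.segment_head(h)

# Query density and segment logits at arbitrary 3D points
density, seg_logits = DensitySegmentNet()(torch.rand(512, 3))
```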
no code implementations • 22 Sep 2020 • Kushagra Rastogi, Jonathan Lee, Fabrice Harel-Canada, Aditya Joglekar
This work extends the analysis of the theoretical results presented in the paper "Is Q-Learning Provably Efficient?"