no code implementations • 2 Oct 2022 • Prashnna K Gyawali, Xiaoxia Liu, James Zou, Zihuai He
Despite extensive recent efforts to define different feature importance metrics for deep learning models, we identified that inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable.
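The instability the authors describe can be reproduced in miniature: two runs of the same model that differ only in random seed (initialization and data shuffling) can assign different gradient-based importance to the same features. The network, synthetic data, and saliency metric below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def make_data(n=500, d=6, seed=123):
    # Synthetic data with a redundant, correlated feature (x1 ~ x0),
    # a common source of importance-ranking instability.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)
    y = (X[:, 0] + X[:, 2] > 0).astype(float)
    return X, y

def importance(seed, X, y, hidden=16, epochs=200, lr=0.5):
    """Train a one-hidden-layer net; return mean |d p / d x| per feature."""
    rng = np.random.default_rng(seed)      # seed controls init AND shuffling
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = 0.0
    for _ in range(epochs):
        idx = rng.permutation(n)           # stochastic data ordering
        Xb, yb = X[idx], y[idx]
        h = np.tanh(Xb @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        dlogit = (p - yb[:, None]) / n     # binary cross-entropy gradient
        dW2, db2 = h.T @ dlogit, dlogit.sum()
        dh = (dlogit @ W2.T) * (1 - h ** 2)
        W1 -= lr * (Xb.T @ dh); b1 -= lr * dh.sum(0)
        W2 -= lr * dW2;         b2 -= lr * db2
    # Input-gradient saliency: |d p / d x| averaged over the data.
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    G = ((1 - h ** 2) * W2[:, 0]) @ W1.T   # d logit / d x, shape (n, d)
    return np.abs(p * (1 - p) * G).mean(axis=0)

X, y = make_data()
for seed in (0, 1, 2):
    s = importance(seed, X, y)
    print(seed, np.argsort(-s))  # importance ranking; often varies with seed
```

Only the seed differs between runs, yet the induced importance rankings need not agree, which is the instability the abstract refers to.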
no code implementations • 10 May 2022 • Prashnna K Gyawali, Yann Le Guen, Xiaoxia Liu, Hua Tang, James Zou, Zihuai He
This can lead to biases in the risk predictors, resulting in poor generalization when applied to minority populations and admixed individuals such as African Americans.
no code implementations • 25 Feb 2020 • Sandesh Ghimire, Prashnna K Gyawali, Linwei Wang
Based on this theory, we then present a scalable way to control the complexity of the discriminator for a reliable estimation of KL divergence.
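A minimal sketch of discriminator-based KL estimation via the Donsker-Varadhan bound, KL(P||Q) = sup_T E_P[T] - log E_Q[exp(T)]; the paper's discriminator-complexity control is not reproduced here. The linear critic and Gaussian P, Q are illustrative choices: for P = N(1, 1) and Q = N(0, 1) the true KL is 0.5 and the optimal critic T(x) = x - 0.5 is linear (the bound is shift-invariant, so a single weight suffices).

```python
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 20000)   # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, 20000)   # samples from Q = N(0, 1)

w, lr = 0.0, 0.1                   # critic T(x) = w * x
for _ in range(300):
    eT = np.exp(w * xq)
    # gradient ascent on the DV objective E_P[T] - log E_Q[exp(T)];
    # the objective is concave in w (log-partition term is convex)
    w += lr * (xp.mean() - (eT * xq).mean() / eT.mean())

kl_est = w * xp.mean() - np.log(np.exp(w * xq).mean())
print(f"KL estimate: {kl_est:.3f}")  # typically close to the true value 0.5
```

With an unconstrained (e.g. deep) critic and finite samples, the exp term in the bound can blow up; restricting the critic's complexity, as the paper studies, is what keeps the estimate reliable.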
no code implementations • 26 Oct 2019 • Prashnna K Gyawali, Rudra Shah, Linwei Wang, VSR Veeravasarapu, Maneesh Singh
Variational Autoencoders (VAE) are probabilistic deep generative models underpinned by elegant theory, stable training processes, and meaningful manifold representations.
no code implementations • 31 Oct 2018 • Prashnna K Gyawali, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John L. Sapp, Linwei Wang
While deep representation learning has become increasingly capable of separating task-relevant representations from other confounding factors in the data, two significant challenges remain.
1 code implementation • 4 Aug 2018 • Prashnna K Gyawali, B. Milan Horacek, John L. Sapp, Linwei Wang
In this work, we present a conditional variational autoencoder (VAE) to extract the subject-specific adjustment to the ECG data, conditioned on task-specific representations learned from a deterministic encoder.
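A schematic (untrained, random-weight) forward pass matching the architecture described: a deterministic encoder produces a task-specific representation t, the conditional VAE encoder infers a subject-specific latent z from the ECG input x together with t, and the decoder reconstructs x from (z, t). All layer sizes and the single-linear-layer modules are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T, Z = 120, 16, 8            # assumed ECG dim, task-rep dim, latent dim
x = rng.normal(size=(4, D))     # a batch of 4 stand-in "ECG" vectors

lin = lambda i, o: rng.normal(0, 0.1, (i, o))
W_task = lin(D, T)                          # deterministic task encoder
W_mu, W_lv = lin(D + T, Z), lin(D + T, Z)   # conditional VAE encoder
W_dec = lin(Z + T, D)                       # conditional decoder

t = np.tanh(x @ W_task)                     # task-specific representation
enc_in = np.concatenate([x, t], axis=1)
mu, logvar = enc_in @ W_mu, enc_in @ W_lv   # q(z | x, t), diagonal Gaussian
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterize
x_hat = np.concatenate([z, t], axis=1) @ W_dec             # p(x | z, t)

recon = ((x - x_hat) ** 2).mean()           # reconstruction term of the ELBO
kl = 0.5 * np.sum(mu**2 + np.exp(logvar) - 1.0 - logvar, axis=1).mean()
print(x_hat.shape, float(recon), float(kl))
```

The point of the conditioning is the factorization: subject-specific variation lives in z while the task-relevant content is routed through t, so the KL penalty regularizes only the subject-specific part.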