Search Results for author: Prashnna K Gyawali

Found 6 papers, 1 paper with code

Ensembling improves stability and power of feature selection for deep learning models

no code implementations • 2 Oct 2022 • Prashnna K Gyawali, Xiaoxia Liu, James Zou, Zihuai He

Despite extensive recent efforts to define different feature importance metrics for deep learning models, we identified that inherent stochasticity in the design and training of deep learning models makes commonly used feature importance scores unstable.

Feature Importance • Feature Selection
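
The abstract's point about instability and ensembling can be illustrated with a short sketch. Below, feature-importance scores are averaged over models trained from different random seeds; the small MLP, the gradient-based saliency metric, and the omitted training loop are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: average feature-importance scores over an ensemble of models
# trained from different seeds, reducing seed-to-seed variance of the scores.
# `make_model`, the saliency metric, and the skipped training are assumptions.
import torch
import torch.nn as nn

def make_model(n_features: int, seed: int) -> nn.Module:
    torch.manual_seed(seed)                      # different init per ensemble member
    return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

def saliency_importance(model: nn.Module, X: torch.Tensor) -> torch.Tensor:
    # One common importance score: mean absolute input gradient per feature.
    X = X.clone().requires_grad_(True)
    model(X).sum().backward()
    return X.grad.abs().mean(dim=0)

def ensembled_importance(X: torch.Tensor, n_models: int = 10) -> torch.Tensor:
    scores = []
    for seed in range(n_models):
        model = make_model(X.shape[1], seed)
        # ... train `model` here (omitted for brevity) ...
        scores.append(saliency_importance(model, X))
    return torch.stack(scores).mean(dim=0)       # ensembled, more stable scores

importance = ensembled_importance(torch.randn(256, 20))
```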

Improving genetic risk prediction across diverse population by disentangling ancestry representations

no code implementations • 10 May 2022 • Prashnna K Gyawali, Yann Le Guen, Xiaoxia Liu, Hua Tang, James Zou, Zihuai He

This can lead to biases in the risk predictors resulting in poor generalization when applied to minority populations and admixed individuals such as African Americans.

Genetic Risk Prediction

Analysis of Discriminator in RKHS Function Space for Kullback-Leibler Divergence Estimation

no code implementations • 25 Feb 2020 • Sandesh Ghimire, Prashnna K Gyawali, Linwei Wang

Based on this theory, we then present a scalable way to control the complexity of the discriminator for a reliable estimation of KL divergence.

Generative Adversarial Network
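
As a generic illustration of discriminator-based KL estimation: the sketch below maximizes the Donsker-Varadhan lower bound with a small neural discriminator. The paper analyzes the discriminator in an RKHS function space, so the network and training setup here are stand-in assumptions, not the paper's method.

```python
# Sample-based KL estimation with a learned discriminator T via the
# Donsker-Varadhan bound:  KL(P || Q) >= E_P[T(x)] - log E_Q[exp(T(x))].
# The neural-network T is illustrative; the paper studies T in an RKHS.
import math
import torch
import torch.nn as nn

def dv_kl_estimate(T: nn.Module, x_p: torch.Tensor, x_q: torch.Tensor) -> torch.Tensor:
    term_p = T(x_p).mean()
    # log E_Q[exp(T)] computed stably with logsumexp.
    term_q = torch.logsumexp(T(x_q).squeeze(-1), dim=0) - math.log(x_q.shape[0])
    return term_p - term_q

T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(T.parameters(), lr=1e-3)
x_p = torch.randn(4096, 2) + 1.0   # samples from P = N([1, 1], I)
x_q = torch.randn(4096, 2)         # samples from Q = N(0, I); true KL = 1.0

for _ in range(1000):
    opt.zero_grad()
    (-dv_kl_estimate(T, x_p, x_q)).backward()   # maximize the lower bound
    opt.step()

print(dv_kl_estimate(T, x_p, x_q).item())       # should approach roughly 1.0
```

An unconstrained discriminator can overfit the finite samples and overshoot the true divergence, which is the reliability issue the paper's complexity control is aimed at.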

Wavelets to the Rescue: Improving Sample Quality of Latent Variable Deep Generative Models

no code implementations • 26 Oct 2019 • Prashnna K Gyawali, Rudra Shah, Linwei Wang, VSR Veeravasarapu, Maneesh Singh

Variational Autoencoders (VAE) are probabilistic deep generative models underpinned by elegant theory, stable training processes, and meaningful manifold representations.
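
For readers unfamiliar with the model class, here is a minimal VAE sketch with a Gaussian encoder and the usual ELBO; it is the standard formulation only, with assumed layer sizes, and does not include the wavelet-based improvements the paper proposes.

```python
# Minimal VAE: Gaussian encoder q(z|x), logit-output decoder p(x|z), and the
# negative ELBO = reconstruction loss + KL(q(z|x) || N(0, I)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim: int = 784, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl   # minimized during training
```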

Deep Generative Model with Beta Bernoulli Process for Modeling and Learning Confounding Factors

no code implementations • 31 Oct 2018 • Prashnna K Gyawali, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John L. Sapp, Linwei Wang

While deep representation learning has become increasingly capable of separating task-relevant representations from other confounding factors in the data, two significant challenges remain.

Representation Learning

Learning disentangled representation from 12-lead electrograms: application in localizing the origin of Ventricular Tachycardia

1 code implementation • 4 Aug 2018 • Prashnna K Gyawali, B. Milan Horacek, John L. Sapp, Linwei Wang

In this work, we present a conditional variational autoencoder (VAE) to extract the subject-specific adjustment to the ECG data, conditioned on task-specific representations learned from a deterministic encoder.
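
A rough sketch of the architecture described in the abstract: a deterministic encoder produces a task-specific representation, and a conditional VAE models the remaining subject-specific variation conditioned on it. The layer sizes, simple MLP encoders, and flattened ECG input are illustrative assumptions, not the paper's exact networks or preprocessing.

```python
# Deterministic task encoder + conditional VAE for subject-specific variation,
# conditioned on the task representation. Shapes and MLPs are assumptions.
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):                       # deterministic, task-specific
    def __init__(self, x_dim: int, t_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, t_dim))
    def forward(self, x):
        return self.net(x)

class ConditionalVAE(nn.Module):                    # subject-specific adjustment
    def __init__(self, x_dim: int, t_dim: int, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + t_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + t_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x, t):
        h = self.enc(torch.cat([x, t], dim=-1))     # encoder conditioned on task rep.
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_hat = self.dec(torch.cat([z, t], dim=-1)) # decoder also sees the task rep.
        return x_hat, mu, logvar

x = torch.randn(8, 12 * 100)                        # e.g. flattened 12-lead ECG segments
t = TaskEncoder(x.shape[1], t_dim=32)(x)
x_hat, mu, logvar = ConditionalVAE(x.shape[1], t_dim=32)(x, t)
```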
