Search Results for author: Prashnna Kumar Gyawali

Found 9 papers, 6 papers with code

Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization

1 code implementation · 23 Sep 2020 · Prashnna Kumar Gyawali, Sandesh Ghimire, Linwei Wang

On three benchmark data sets and one real-world biomedical data set, we demonstrate that this combined regularization results in improved generalization performance of SSL when learning from a small amount of labeled data.
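As context for the entry above, here is a minimal sketch of the generic mixup interpolation that this line of work builds on; the paper's additional explicit Lipschitz regularizer is not reproduced, and the names (model, x1/y1, x2/y2, alpha) are illustrative assumptions.

```python
# Minimal mixup sketch (generic technique, not the paper's exact training loop).
import numpy as np
import torch.nn.functional as F

def mixup_batch(x1, y1, x2, y2, alpha=0.75):
    """Convexly interpolate two (input, one-hot target) pairs with lambda ~ Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

def mixup_loss(model, x_mix, y_mix):
    """Cross-entropy against the soft, mixed targets."""
    log_probs = F.log_softmax(model(x_mix), dim=-1)
    return -(y_mix * log_probs).sum(dim=-1).mean()
```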

Learning Geometry-Dependent and Physics-Based Inverse Image Reconstruction

no code implementations · 18 Jul 2020 · Xiajun Jiang, Sandesh Ghimire, Jwala Dhamala, Zhiyuan Li, Prashnna Kumar Gyawali, Linwei Wang

However, many reconstruction problems involve imaging physics that depends on the underlying non-Euclidean geometry.

Image Reconstruction

Semi-supervised Medical Image Classification with Global Latent Mixing

1 code implementation · 22 May 2020 · Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan Li, Linwei Wang

In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL.

General Classification · Image Classification · +1
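A hedged sketch of the general idea the snippet above alludes to, mixing in a hidden (latent) representation rather than in input space to fill the void between data points; the encoder/classifier_head split and the way labels are mixed are assumptions for illustration, not the paper's implementation.

```python
# Sketch of latent-space mixing: interpolate hidden features of two samples
# and train against correspondingly interpolated targets (generic illustration).
import numpy as np
import torch.nn.functional as F

def latent_mix_loss(encoder, classifier_head, x_a, y_a, x_b, y_b, alpha=1.0):
    lam = np.random.beta(alpha, alpha)
    h_a, h_b = encoder(x_a), encoder(x_b)      # hidden representations
    h_mix = lam * h_a + (1.0 - lam) * h_b      # interpolate between data points in latent space
    y_mix = lam * y_a + (1.0 - lam) * y_b      # soft, mixed targets (one-hot inputs assumed)
    log_probs = F.log_softmax(classifier_head(h_mix), dim=-1)
    return -(y_mix * log_probs).sum(dim=-1).mean()
```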

Progressive Learning and Disentanglement of Hierarchical Representations

1 code implementation · ICLR 2020 · Zhiyuan Li, Jaideep Vitthal Murkute, Prashnna Kumar Gyawali, Linwei Wang

By drawing on the respective advantages of hierarchical representation learning and progressive learning, this is, to our knowledge, the first attempt to improve disentanglement by progressively growing the capacity of a VAE to learn hierarchical representations.

Disentanglement

Improving Disentangled Representation Learning with the Beta Bernoulli Process

1 code implementation · 3 Sep 2019 · Prashnna Kumar Gyawali, Zhiyuan Li, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John Sapp, Linwei Wang

We note that the independence within and the complexity of the latent density are two different properties we constrain when regularizing the posterior density: while the former promotes the disentangling ability of VAE, the latter -- if overly limited -- creates an unnecessary competition with the data reconstruction objective in VAE.

Decision Making · Representation Learning
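For context on the snippet's distinction between the independence within and the complexity of the latent density, the aggregate-posterior KL used for VAE regularization is commonly decomposed as below. This is a standard decomposition from the disentanglement literature, shown only as background; it is not the paper's Beta Bernoulli formulation, and mapping "independence" to the total-correlation term and "complexity" to the dimension-wise KL is an interpretive assumption.

```latex
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\big(q(z)\,\|\,\textstyle\prod_j q(z_j)\big)}_{\text{total correlation (independence)}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL (complexity)}}
```

Penalizing the whole KL with a single large weight constrains all three terms at once, which is where the competition with the reconstruction objective can arise.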

Semi-Supervised Learning by Disentangling and Self-Ensembling Over Stochastic Latent Space

1 code implementation · 22 Jul 2019 · Prashnna Kumar Gyawali, Zhiyuan Li, Sandesh Ghimire, Linwei Wang

In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space.

Data Augmentation · Multi-Label Classification · +1
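A minimal sketch of the generic self-ensembling / consistency idea the snippet refers to: two stochastic forward passes on the same unlabeled input are encouraged to agree. Using dropout as the noise source and an MSE consistency loss are illustrative choices, not the paper's disentangled-latent formulation.

```python
# Generic consistency (self-ensembling) loss over two stochastic predictions.
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled):
    model.train()  # keep stochastic layers (e.g., dropout) active for both passes
    p1 = F.softmax(model(x_unlabeled), dim=-1)
    p2 = F.softmax(model(x_unlabeled), dim=-1)
    return F.mse_loss(p1, p2.detach())  # push the two stochastic predictions to agree
```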

Generative Modeling and Inverse Imaging of Cardiac Transmembrane Potential

no code implementations · 12 May 2019 · Sandesh Ghimire, Jwala Dhamala, Prashnna Kumar Gyawali, John L. Sapp, B. Milan Horacek, Linwei Wang

We introduce a novel model-constrained inference framework that replaces conventional physiological models with a deep generative model trained to generate TMP sequences from low-dimensional generative factors.
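A hedged sketch of the general "generative prior for an inverse problem" pattern the snippet describes: low-dimensional generative factors z are optimized so that the decoded TMP sequence, pushed through a forward operator, explains the measurements. The decoder, the linear operator H, and the plain Adam fit are assumptions for illustration only, not the paper's inference framework.

```python
# Sketch: fit low-dimensional generative factors so decoded signals explain measurements.
import torch

def fit_generative_factors(decoder, H, y, z_dim, steps=500, lr=1e-2):
    """decoder: z -> TMP sequence; H: forward operator (callable); y: observed measurements."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(z)              # candidate TMP sequence from generative factors
        residual = H(x_hat) - y         # data-fit term through the forward model
        loss = (residual ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach(), decoder(z).detach()
```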

Improving Generalization of Sequence Encoder-Decoder Networks for Inverse Imaging of Cardiac Transmembrane Potential

no code implementations · 12 Oct 2018 · Sandesh Ghimire, Prashnna Kumar Gyawali, John L. Sapp, Milan Horacek, Linwei Wang

The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by constrained stochasticity combined with global aggregation of temporal information in the latent space.

Learning Theory
