1 code implementation • 23 Sep 2020 • Prashnna Kumar Gyawali, Sandesh Ghimire, Linwei Wang
On three benchmark data sets and one real-world biomedical data set, we demonstrate that this combined regularization results in improved generalization performance of SSL when learning from a small amount of labeled data.
no code implementations • 18 Jul 2020 • Xiajun Jiang, Sandesh Ghimire, Jwala Dhamala, Zhiyuan Li, Prashnna Kumar Gyawali, Linwei Wang
However, many reconstruction problems involve imaging physics that depend on the underlying non-Euclidean geometry.
1 code implementation • 22 May 2020 • Prashnna Kumar Gyawali, Sandesh Ghimire, Pradeep Bajracharya, Zhiyuan Li, Linwei Wang
In this work, we argue that regularizing the global smoothness of neural functions by filling the void in between data points can further improve SSL.
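The idea of filling the void between data points can be illustrated with a mixup-style interpolation: virtual samples are drawn on the segment between pairs of inputs, giving the regularizer something to act on where no real data exists. The sketch below is a hypothetical illustration of that idea, not the paper's exact implementation; the function name and Beta parameter are assumptions.

```python
import numpy as np

def mixup(x1, x2, alpha=0.75, rng=None):
    """Interpolate two inputs to create a virtual sample that fills
    the space between observed data points (mixup-style; illustrative)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam

# A virtual point on the segment between two inputs
x_a = np.array([0.0, 0.0])
x_b = np.array([1.0, 1.0])
x_mix, lam = mixup(x_a, x_b)
```

A smoothness regularizer would then encourage the network's prediction at `x_mix` to interpolate its predictions at `x_a` and `x_b` accordingly.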
1 code implementation • ICLR 2020 • Zhiyuan Li, Jaideep Vitthal Murkute, Prashnna Kumar Gyawali, Linwei Wang
By drawing on the respective advantages of hierarchical representation learning and progressive learning, this is, to our knowledge, the first attempt to improve disentanglement by progressively growing the capacity of a VAE to learn hierarchical representations.
1 code implementation • 3 Sep 2019 • Prashnna Kumar Gyawali, Zhiyuan Li, Cameron Knight, Sandesh Ghimire, B. Milan Horacek, John Sapp, Linwei Wang
We note that the independence within and the complexity of the latent density are two different properties we constrain when regularizing the posterior density: while the former promotes the disentangling ability of the VAE, the latter, if overly limited, creates an unnecessary competition with the data reconstruction objective of the VAE.
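The competition described above can be made concrete with the standard VAE objective, where a single weight on the KL term trades off latent capacity against reconstruction: pushing the posterior too hard toward the prior limits the information the latent code can carry. This minimal NumPy sketch of that trade-off is an assumption for illustration, not the paper's decomposition of the regularizer.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL(q(z|x) || N(0, I)) per sample; constraining this too tightly
    limits latent complexity and competes with reconstruction."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """ELBO-style loss: reconstruction term plus a weighted posterior
    regularizer (beta controls how hard the posterior is constrained)."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    kl = gaussian_kl(mu, logvar)
    return np.mean(recon + beta * kl)
```

With `beta` large, the KL term dominates and reconstruction degrades, which is the "unnecessary competition" the excerpt refers to.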
1 code implementation • 22 Jul 2019 • Prashnna Kumar Gyawali, Zhiyuan Li, Sandesh Ghimire, Linwei Wang
In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space.
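One way to exploit the stochasticity of a latent space for self-ensembling is a consistency term that penalizes disagreement between predictions made from two independent latent draws of the same input. The sketch below is a hypothetical rendering of that idea (the reparameterized sampling and MSE consistency are assumptions, not the authors' exact formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, logvar, rng):
    """Reparameterized draw z = mu + sigma * eps from the latent posterior."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def consistency_loss(decode, mu, logvar, rng):
    """Self-ensembling-style consistency: penalize disagreement between
    predictions from two stochastic samples of the same posterior."""
    z1 = sample_latent(mu, logvar, rng)
    z2 = sample_latent(mu, logvar, rng)
    return np.mean((decode(z1) - decode(z2)) ** 2)
```

As the posterior variance shrinks, the two samples coincide and the consistency penalty vanishes; larger variance exposes the predictor to more of the latent neighborhood.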
no code implementations • 12 May 2019 • Sandesh Ghimire, Jwala Dhamala, Prashnna Kumar Gyawali, John L. Sapp, B. Milan Horacek, Linwei Wang
We introduce a novel model-constrained inference framework that replaces conventional physiological models with a deep generative model trained to generate TMP sequences from low-dimensional generative factors.
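Model-constrained inference of this kind can be sketched as fitting the low-dimensional generative factors so that a fixed, pretrained decoder reproduces the observation. The toy implementation below uses finite-difference gradient descent through an arbitrary decoder callable; the function name, step sizes, and optimizer are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def infer_factors(decoder, y, z0, lr=0.1, steps=200, eps=1e-4):
    """Fit generative factors z so the fixed decoder's output matches
    observation y, via finite-difference gradient descent (toy sketch)."""
    z = z0.astype(float).copy()
    for _ in range(steps):
        base = np.sum((decoder(z) - y) ** 2)
        grad = np.zeros_like(z)
        for i in range(z.size):
            zp = z.copy()
            zp[i] += eps
            grad[i] = (np.sum((decoder(zp) - y) ** 2) - base) / eps
        z -= lr * grad
    return z
```

In the paper's setting the decoder would be the deep generative model trained to produce TMP sequences; here any differentiable-in-effect callable stands in for it.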
1 code implementation • 5 Mar 2019 • Sandesh Ghimire, Prashnna Kumar Gyawali, Jwala Dhamala, John L. Sapp, Milan Horacek, Linwei Wang
Deep learning networks have shown state-of-the-art performance in many image reconstruction problems.
no code implementations • 12 Oct 2018 • Sandesh Ghimire, Prashnna Kumar Gyawali, John L. Sapp, Milan Horacek, Linwei Wang
The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by constrained stochasticity combined with global aggregation of temporal information in the latent space.