no code implementations • 24 Oct 2023 • Abijith Jagannath Kamath, Chandra Sekhar Seelamantula
We develop a kernel-based sampling approach, which allows for perfect reconstruction with a sample complexity equal to the rate of innovation of the signal.
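The reconstruction hinges on the classical annihilating-filter (Prony) step used in finite-rate-of-innovation sampling. Below is a minimal noise-free sketch of that step, assuming access to the exponential moments of a stream of Diracs; the paper's specific kernel design is not reproduced here.

```python
import numpy as np

def annihilating_filter_locations(s, K, tau):
    """Recover K Dirac locations from moments s[m] = sum_k a_k exp(-2j*pi*m*t_k/tau).

    Minimal Prony/annihilating-filter sketch (noise-free case only).
    """
    N = len(s)
    # The annihilating filter h (length K+1) satisfies (s * h)[m] = 0.
    T = np.array([[s[m - k] for k in range(K + 1)] for m in range(K, N)])
    # Null vector of T gives the filter coefficients (smallest right singular vector).
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()
    # Roots of h are u_k = exp(-2j*pi*t_k/tau), which encode the locations.
    u = np.roots(h)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)
    return np.sort(t)

# Example: two Diracs at t = 0.2 and 0.7 on [0, 1)
tau, K = 1.0, 2
t_true, a = np.array([0.2, 0.7]), np.array([1.0, 0.5])
m = np.arange(8)
s = (a * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)
print(annihilating_filter_locations(s, K, tau))  # ~ [0.2, 0.7]
```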
no code implementations • 20 Jul 2023 • Kartheek Kumar Reddy Nareddy, Abijith Jagannath Kamath, Chandra Sekhar Seelamantula
We consider the analysis-sparse $\ell_1$-minimization problem with a generalized $\ell_2$-norm-based data-fidelity and show that it effectively corresponds to using a tight-frame sensing matrix.
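The tight-frame correspondence can be checked numerically. A minimal sketch, assuming the particular weighting $M = (AA^T)^{-1}$ for the generalized fidelity (the paper's exact weighting may differ): with $B = M^{1/2}A$, the rows of $B$ are orthonormal, i.e., $B$ is a tight frame.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # fat sensing matrix, full row rank

# Generalized l2 fidelity ||y - Ax||_M^2 with M = (A A^T)^{-1} equals the
# standard l2 fidelity ||M^(1/2) y - B x||_2^2 with B = M^(1/2) A.
M = np.linalg.inv(A @ A.T)
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(w)) @ V.T
B = M_half @ A

# B is a tight frame: B B^T = I.
print(np.allclose(B @ B.T, np.eye(20)))    # True
```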
no code implementations • 8 Jun 2023 • Abijith Jagannath Kamath, Chandra Sekhar Seelamantula
We present an iterative technique for perfect reconstruction subject to the events satisfying a density criterion.
no code implementations • 2 Jun 2023 • Siddarth Asokan, Nishanth Shetty, Aadithya Srikanth, Chandra Sekhar Seelamantula
Generative adversarial networks (GANs) comprise a generator, trained to learn the underlying distribution of the desired data, and a discriminator, trained to distinguish real samples from those output by the generator.
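For concreteness, a minimal PyTorch sketch of this two-player setup with the standard non-saturating losses (illustrative only; the network sizes, data, and losses here are placeholders, not the variants studied in the paper):

```python
import torch
import torch.nn as nn

# G maps noise to data space; D scores samples as real or fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) * 0.5 + 1.0      # stand-in "real" data
fake = G(torch.randn(64, 8))

ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)
loss_D = bce(D(real), ones) + bce(D(fake.detach()), zeros)  # discriminator objective
loss_G = bce(D(fake), ones)                                  # non-saturating generator objective
```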
no code implementations • 1 Jun 2023 • Siddarth Asokan, Chandra Sekhar Seelamantula
We show analytically, via the least-squares (LSGAN) and Wasserstein (WGAN) GAN variants, that the discriminator optimization problem is one of interpolation in $n$-dimensions.
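The interpolation view can be illustrated with off-the-shelf radial-basis-function interpolation: fit a smooth function that equals +1 on real samples and -1 on fake ones. A sketch using SciPy's RBFInterpolator with a polyharmonic (thin-plate-spline) kernel, which is an assumption here rather than the paper's exact kernel:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
real = rng.normal(loc=2.0, size=(50, 2))
fake = rng.normal(loc=-2.0, size=(50, 2))

# "Discriminator" as an interpolant through labeled samples.
pts = np.vstack([real, fake])
vals = np.concatenate([np.ones(50), -np.ones(50)])
D = RBFInterpolator(pts, vals, kernel='thin_plate_spline')

print(D(np.array([[2.0, 2.0], [-2.0, -2.0]])))  # ~ +1 near real, ~ -1 near fake
```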
2 code implementations • CVPR 2023 • Siddarth Asokan, Chandra Sekhar Seelamantula
We demonstrate the efficacy of the Spider approach on DCGAN, conditional GAN, PGGAN, StyleGAN2 and StyleGAN3.
no code implementations • 23 Jul 2021 • Dhruv Jawali, Abhishek Kumar, Chandra Sekhar Seelamantula
Wavelets have proven to be highly successful in several signal and image processing applications.
no code implementations • 7 Jul 2021 • Abijith Jagannath Kamath, Sunil Rudresh, Chandra Sekhar Seelamantula
Time-encoding of continuous-time signals is an alternative sampling paradigm to conventional methods such as Shannon's sampling.
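As a concrete instance, an integrate-and-fire time-encoding machine maps a signal to a sequence of firing instants. A minimal sketch (the paper's specific encoder and reconstruction are not reproduced):

```python
import numpy as np

def if_tem(x, dt, threshold):
    """Integrate-and-fire time encoding: integrate x(t) and record a firing
    instant whenever the running integral crosses the threshold, then reset."""
    spikes, acc = [], 0.0
    for n, xn in enumerate(x):
        acc += xn * dt
        if acc >= threshold:
            spikes.append(n * dt)
            acc -= threshold
    return np.array(spikes)

t = np.arange(0, 1, 1e-4)
x = 2.0 + np.sin(2 * np.pi * 5 * t)        # bias keeps the input positive
print(if_tem(x, 1e-4, 0.05)[:5])           # first few firing instants
```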
no code implementations • 25 May 2021 • Vinayak Killedar, Praveen Kumar Pokala, Chandra Sekhar Seelamantula
We also consider the effect of the dimension of the latent space and the sparsity factor in validating the SDLSS framework.
no code implementations • 13 May 2021 • Kartheek Kumar Reddy Nareddy, Mani Madhoolika Bulusu, Praveen Kumar Pokala, Chandra Sekhar Seelamantula
We also consider quantization of the network weights.
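A minimal sketch of uniform symmetric weight quantization, which is one common choice; whether it matches the scheme considered in the paper is an assumption:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform symmetric quantization: snap weights to a grid of
    2^(bits-1) - 1 levels per sign, scaled to the weight range."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = np.random.default_rng(0).standard_normal(1000)
print(np.unique(quantize_uniform(w, bits=4)).size)  # at most 15 distinct levels
```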
no code implementations • 1 May 2021 • Swapnil Mache, Praveen Kumar Pokala, Kusala Rajendran, Chandra Sekhar Seelamantula
We solve the problem of sparse signal deconvolution in the context of seismic reflectivity inversion, which pertains to high-resolution recovery of the subsurface reflection coefficients.
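A generic baseline for this problem is ISTA-type sparse deconvolution: a gradient step on the data-fidelity term followed by soft thresholding. A minimal sketch on a toy reflectivity sequence, assuming a circular-convolution model and a placeholder wavelet (not the paper's method):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n); x_true[[20, 60, 100]] = [1.0, -0.8, 0.6]   # sparse reflectivity
h = np.array([0.5, 1.0, 0.5])                                     # toy seismic wavelet
H = np.array([np.roll(np.pad(h, (0, n - len(h))), k) for k in range(n)]).T  # circular convolution
y = H @ x_true + 0.01 * rng.standard_normal(n)

L = np.linalg.norm(H, 2) ** 2                                     # Lipschitz constant of the gradient
lam, x = 0.05, np.zeros(n)
for _ in range(500):
    x = soft(x - H.T @ (H @ x - y) / L, lam / L)                  # gradient step + soft threshold
print(np.flatnonzero(np.abs(x) > 0.1))                            # support concentrates near [20, 60, 100]
```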
no code implementations • 10 Apr 2021 • Swapnil Mache, Praveen Kumar Pokala, Kusala Rajendran, Chandra Sekhar Seelamantula
The network is referred to as deep-unfolded reflectivity inversion network (DuRIN).
DuRIN is obtained by unrolling the iterations of a sparse-recovery algorithm into network layers with learnable parameters, in the spirit of the LISTA-style construction sketched further below.
no code implementations • 13 Dec 2020 • Aniketh Manjunath, Subramanya Jois, Chandra Sekhar Seelamantula
Further, we perform two-stage glaucoma severity grading using the cup-to-disc ratio (CDR) computed from the obtained optic disc/optic cup (OD/OC) segmentation.
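A minimal sketch of one common CDR definition, the vertical cup-to-disc ratio computed from binary segmentation masks (the paper's exact grading rules may differ):

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents of the
    cup and disc regions in binary segmentation masks."""
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_h = cup_rows.max() - cup_rows.min() + 1
    disc_h = disc_rows.max() - disc_rows.min() + 1
    return cup_h / disc_h
```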
1 code implementation • NeurIPS 2020 • Siddarth Asokan, Chandra Sekhar Seelamantula
Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution.
no code implementations • 2 Oct 2018 • Subhadip Mukherjee, Chandra Sekhar Seelamantula
A comparison with state-of-the-art algorithms shows that the proposed algorithm has higher reconstruction accuracy and is about 2 to 3 dB away from the Cramér-Rao bound (CRB).
2 code implementations • 19 Jan 2018 • Sunil Rudresh, Aditya Vasisht, Karthika Vijayan, Chandra Sekhar Seelamantula
Time- and pitch-scale modifications of speech signals find important applications in speech synthesis, playback systems, voice conversion, and learning/hearing aids.
no code implementations • 29 Jun 2017 • Subhadip Mukherjee, Deepak R., Huaijin Chen, Ashok Veeraraghavan, Chandra Sekhar Seelamantula
The proposed online algorithm is useful for progressive decoding, where one reconstructs a sparse signal from linear measurements without having to wait until all measurements are acquired.
no code implementations • 20 May 2017 • Debabrata Mahapatra, Subhadip Mukherjee, Chandra Sekhar Seelamantula
We address the problem of reconstructing sparse signals from noisy and compressive measurements using a feed-forward deep neural network (DNN) with an architecture motivated by the iterative shrinkage-thresholding algorithm (ISTA).
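The unrolling idea is to treat each ISTA iteration as a network layer with learnable parameters, as in LISTA (Gregor & LeCun, 2010). A minimal forward-pass sketch with the classical untrained initialization, not the paper's exact architecture:

```python
import numpy as np

def lista_forward(y, We, S, theta, n_layers):
    """LISTA-style unrolled forward pass: each layer mimics one ISTA iteration
    x <- soft(We @ y + S @ x, theta), with We, S, theta treated as learnable."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = soft(We @ y, theta)
    for _ in range(n_layers - 1):
        x = soft(We @ y + S @ x, theta)
    return x

# Classical initialization from a dictionary A:
# We = A^T / L, S = I - A^T A / L, with L an upper bound on eig(A^T A).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
L = np.linalg.norm(A, 2) ** 2
We, S = A.T / L, np.eye(100) - (A.T @ A) / L
x_true = np.zeros(100); x_true[[5, 40]] = [1.0, -1.0]
y = A @ x_true
x_hat = lista_forward(y, We, S, theta=0.01, n_layers=16)
print(np.flatnonzero(np.abs(x_hat) > 0.2))   # support estimate, ideally {5, 40}
```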
no code implementations • 6 Dec 2014 • Sagar Venkatesh Gubbi, Chandra Sekhar Seelamantula
For the case of additive white Gaussian noise contamination, the risk estimation procedure relies on Stein's lemma.
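For soft thresholding, Stein's lemma yields a classical closed-form risk estimate that can be minimized over the threshold without access to the clean signal. A minimal sketch using the standard Donoho-Johnstone form (the paper's estimator may differ):

```python
import numpy as np

def sure_soft(x, lam, sigma):
    """Stein's unbiased risk estimate (SURE) for soft thresholding of
    x = theta + N(0, sigma^2): -n*sigma^2 + sum min(x^2, lam^2)
    + 2*sigma^2 * #{|x| > lam}."""
    n = len(x)
    return (-n * sigma**2
            + np.sum(np.minimum(x**2, lam**2))
            + 2 * sigma**2 * np.sum(np.abs(x) > lam))

# Pick the threshold minimizing the estimated risk: no clean signal needed.
rng = np.random.default_rng(0)
theta = np.concatenate([np.zeros(900), 5 * np.ones(100)])   # mostly-sparse truth
x = theta + rng.standard_normal(1000)                        # sigma = 1
grid = np.linspace(0.0, 4.0, 81)
lam_best = grid[np.argmin([sure_soft(x, g, 1.0) for g in grid])]
print(lam_best)
```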
no code implementations • 27 Oct 2014 • Manasij Venkatesh, Chandra Sekhar Seelamantula
We propose a bilateral filter with a locally controlled domain kernel for directional edge-preserving smoothing.
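For reference, a plain bilateral filter combines an isotropic spatial (domain) Gaussian with an intensity (range) Gaussian; the paper's contribution, a locally controlled directional domain kernel, would replace the isotropic spatial weights below. A minimal sketch:

```python
import numpy as np

def bilateral_filter(img, sigma_s, sigma_r, radius):
    """Plain bilateral filter: each output pixel is a normalized average
    weighted by a spatial (domain) Gaussian and an intensity (range) Gaussian."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))     # isotropic domain kernel
    pad = np.pad(img, radius, mode='reflect').astype(float)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # range kernel
            w = spatial * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

img = np.random.default_rng(0).random((32, 32))
smooth = bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=3)
```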
no code implementations • 26 Aug 2014 • Subhadip Mukherjee, Rupam Basu, Chandra Sekhar Seelamantula
We develop a dictionary learning algorithm by minimizing the $\ell_1$ distortion metric on the data term, which is known to be robust for non-Gaussian noise contamination.
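The $\ell_1$ data term can be handled with a simple subgradient update in the dictionary step. A minimal sketch of that update alone, with the sparse codes held fixed (the paper's full alternating algorithm is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.laplace(size=(20, 200))            # data with heavy-tailed (non-Gaussian) noise
X = rng.standard_normal((10, 200))         # sparse codes, held fixed for this update
D = rng.standard_normal((20, 10))

step = 1e-3
for _ in range(2000):
    R = Y - D @ X
    D += step * np.sign(R) @ X.T           # subgradient descent on ||Y - DX||_1 w.r.t. D
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # keep atom norms bounded
print(np.abs(Y - D @ X).mean())            # l1 distortion after the update
```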
no code implementations • 19 Mar 2014 • Subhadip Mukherjee, Chandra Sekhar Seelamantula
We show that the proposed algorithm is efficient in terms of memory and computation, and performs on par with the standard learning strategy that operates on the entire dataset at once.
no code implementations • 3 Dec 2013 • Jayanth Krishna Mogali, Adithya Kumar Pediredla, Chandra Sekhar Seelamantula
We show that the contrast energy functional is optimal under certain conditions.