1 code implementation • 29 Jan 2023 • Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong
We study the implicit regularization of gradient descent towards structured sparsity via a novel neural reparameterization, which we call a diagonally grouped linear neural network.
1 code implementation • NeurIPS 2021 • Jiangyuan Li, Thanh V. Nguyen, Chinmay Hegde, Raymond K. W. Wong
In this paper, we study the implicit bias of gradient descent for sparse regression.
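The implicit-bias phenomenon studied in these two entries can be illustrated with a standard Hadamard-product reparameterization for sparse regression: running plain gradient descent on an overparameterized model w = u∘u − v∘v from a small initialization recovers a sparse solution without any explicit penalty. This is a minimal illustrative sketch, not the papers' exact construction; the problem sizes, step size, and initialization scale below are made-up values for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                            # under-determined: fewer samples than features
w_star = np.zeros(d)
w_star[:3] = 1.0                         # 3-sparse ground truth
X = rng.standard_normal((n, d)) / np.sqrt(n)
y = X @ w_star

# Reparameterize w = u*u - v*v and run vanilla gradient descent from a
# small initialization alpha; no explicit sparsity penalty is used.
alpha, lr = 1e-3, 0.02
u = np.full(d, alpha)
v = np.full(d, alpha)
for _ in range(5000):
    r = X @ (u * u - v * v) - y          # residual of the effective weights
    g = X.T @ r                          # gradient w.r.t. the effective weights
    u -= lr * 2 * u * g                  # chain rule through u*u
    v += lr * 2 * v * g                  # chain rule through -v*v
w_hat = u * u - v * v                    # close to the sparse w_star
```

Despite infinitely many interpolating solutions, the multiplicative dynamics keep coordinates started near zero small, so gradient descent is implicitly biased toward the sparse solution.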
no code implementations • 25 Feb 2021 • Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde
Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.
no code implementations • 24 Aug 2020 • Raphaël Pestourie, Youssef Mroueh, Thanh V. Nguyen, Payel Das, Steven G. Johnson
Surrogate models for partial differential equations are widely used in the design of metamaterials to rapidly evaluate the behavior of composable components.
no code implementations • ACL 2020 • Thanh V. Nguyen, Nikhil Rao, Karthik Subbian
Showing items that do not match the search query intent degrades the customer experience in e-commerce.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Thanh V. Nguyen, Youssef Mroueh, Samuel Hoffman, Payel Das, Pierre Dognin, Giuseppe Romano, Chinmay Hegde
We consider the problem of optimization by sampling under multiple black-box constraints in nanomaterial design.
no code implementations • 27 Nov 2019 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
Starting from a randomly initialized autoencoder network, we rigorously prove the linear convergence of gradient descent in two learning regimes, namely: (i) the weakly-trained regime where only the encoder is trained, and (ii) the jointly-trained regime where both the encoder and the decoder are trained.
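The two regimes in this entry can be sketched on a toy linear autoencoder: in the weakly-trained regime only the encoder receives gradient updates, while in the jointly-trained regime the decoder is updated as well. This is an illustrative sketch under assumed dimensions, step size, and Gaussian subspace data, not the paper's precise setting or proof conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 30, 5, 200
A = rng.standard_normal((d, k))
X = A @ rng.standard_normal((k, n))      # data lying on a k-dimensional subspace

def train(jointly_trained, steps=2000, lr=5e-3):
    """Gradient descent on a linear autoencoder x_hat = W_dec @ W_enc @ x."""
    W_enc = 0.1 * rng.standard_normal((k, d))
    W_dec = 0.1 * rng.standard_normal((d, k))
    for _ in range(steps):
        H = W_enc @ X
        R = W_dec @ H - X                # reconstruction residual
        W_enc -= lr * (W_dec.T @ R @ X.T) / n
        if jointly_trained:              # decoder updated only in the joint regime
            W_dec -= lr * (R @ H.T) / n
    return np.mean((W_dec @ (W_enc @ X) - X) ** 2)

loss_weak = train(jointly_trained=False)   # encoder-only training
loss_joint = train(jointly_trained=True)   # encoder and decoder trained
```

In both regimes the reconstruction loss decreases from its initial value; the weakly-trained regime is capped by the fixed random decoder, whereas joint training can also rotate the decoder toward the data subspace.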
no code implementations • 2 Jun 2018 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
For each of these models, we prove that under suitable choices of hyperparameters, architectures, and initialization, autoencoders learned by gradient descent can successfully recover the parameters of the corresponding model.
no code implementations • ICML 2018 • Thanh V. Nguyen, Akshay Soni, Chinmay Hegde
We propose an initialization algorithm that utilizes a small number of extra fully observed samples to produce such a coarse initial estimate.
no code implementations • 16 Nov 2017 • Aditya Balu, Thanh V. Nguyen, Apurva Kokate, Chinmay Hegde, Soumik Sarkar
We introduce a new, systematic framework for visualizing information flow in deep networks.
1 code implementation • 9 Nov 2017 • Thanh V. Nguyen, Raymond K. W. Wong, Chinmay Hegde
To our knowledge, our work introduces the first computationally efficient algorithm for double-sparse coding that enjoys rigorous statistical guarantees.