no code implementations • 28 Apr 2023 • Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt
We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and cannot completely fool either detection software or human examiners.
no code implementations • 8 Oct 2021 • Ameya Joshi, Gauri Jagatap, Chinmay Hegde
Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
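The patch-token self-attention referred to here can be sketched in a few lines. This is a generic single-head illustration, not the paper's model; the patch size, dimensions, and random weights are placeholders:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(img, p):
    """Split an (H, W) image into flattened p x p patch tokens."""
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)          # (num_tokens, p*p)

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention over the patch tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # token-to-token affinities
    return softmax(scores, axis=-1) @ V        # each token mixes all others

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
tokens = patchify(img, p=4)                    # 4 tokens of dimension 16
d = tokens.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)                               # (4, 16)
```

Unlike a convolution, every patch token attends to every other token, so the receptive field is global from the first layer — the structural contrast the abstract draws with convolutional networks.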
no code implementations • 25 Feb 2021 • Thanh V. Nguyen, Gauri Jagatap, Chinmay Hegde
Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval and super-resolution.
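As a toy illustration of a generative prior in compressed sensing, one can stand in for a trained deep generator with a fixed random linear decoder G and recover a signal from m < n measurements by gradient descent over the latent code. All dimensions and the linear G below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 20, 5                    # signal dim, measurements, latent dim

G = rng.standard_normal((n, k))        # stand-in "generator": fixed linear decoder
z_true = rng.standard_normal(k)
x_true = G @ z_true                    # the true signal lies in the range of G
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                         # m < n compressive measurements

# Minimize ||A G(z) - y||^2 over the latent code z by gradient descent.
M = A @ G
step = 1.0 / np.linalg.eigvalsh(M.T @ M).max()
z = np.zeros(k)
for _ in range(500):
    z -= step * M.T @ (M @ z - y)

rel_err = np.linalg.norm(G @ z - x_true) / np.linalg.norm(x_true)
```

The point of the prior is visible in the dimensions: the signal is 64-dimensional but only 20 measurements suffice, because the search is confined to the 5-dimensional range of G.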
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
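ATENT itself is not described in this snippet; as context, the standard adversarial-training step such algorithms build on — an L-infinity PGD attack in the inner loop, a descent step on the attacked input in the outer loop — can be sketched for a linear logistic model. The dimensions, step sizes, and epsilon budget are illustrative:

```python
import numpy as np

def logistic_loss(w, x, y):
    """Binary logistic loss of a linear model, with labels y in {-1, +1}."""
    return np.log1p(np.exp(-y * (w @ x)))

def grad_x(w, x, y):
    """Gradient of the loss w.r.t. the input x (used by the attack)."""
    sigma = 1.0 / (1.0 + np.exp(y * (w @ x)))
    return -sigma * y * w

def pgd_attack(w, x, y, eps=0.3, step=0.1, iters=10):
    """L-inf PGD: signed gradient ascent on the loss, projected onto the eps-ball."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back to the ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.standard_normal(10)
x, y = rng.standard_normal(10), 1.0

x_adv = pgd_attack(w, x, y)
# One adversarial-training step: descend the loss at the attacked input.
sigma = 1.0 / (1.0 + np.exp(y * (w @ x_adv)))
w_new = w + 0.1 * sigma * y * x_adv
```

A full trainer repeats this attack-then-descend step over the dataset; the paper's contribution is a different choice of inner objective, not shown here.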
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Gauri Jagatap, Chinmay Hegde
Untrained deep neural networks have recently been introduced as image priors for linear inverse imaging problems such as denoising, super-resolution, inpainting, and compressive sensing, with promising performance gains over hand-crafted image priors such as sparsity.
2 code implementations • NeurIPS 2019 • Gauri Jagatap, Chinmay Hegde
Specifically, we consider solving linear inverse problems, such as compressive sensing, as well as non-linear problems, such as compressive phase retrieval.
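The difference between the two measurement models is easy to state concretely: compressive sensing observes y = Ax, while compressive phase retrieval only observes the magnitudes |Ax|, which destroys sign/phase information. The dimensions and sparsity below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 32, 80, 3                      # ambient dim, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # s-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)

y_lin = A @ x            # linear measurements (compressive sensing)
y_abs = np.abs(A @ x)    # phaseless measurements (compressive phase retrieval)

# The global sign is lost: x and -x give identical phaseless measurements.
assert np.allclose(np.abs(A @ -x), y_abs)
```

This lost sign information is exactly what makes phase retrieval non-linear: any recovery algorithm must estimate the phases alongside the signal, whereas the linear case reduces to (sparse) least squares.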
no code implementations • 20 Jun 2018 • Gauri Jagatap, Chinmay Hegde
We propose and analyze a new family of algorithms for training neural networks with ReLU activations.
no code implementations • NeurIPS 2017 • Gauri Jagatap, Chinmay Hegde
For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n).
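Block CoPRAM exploits block sparsity; its core projection step — keeping the k blocks of largest energy and zeroing the rest — can be sketched as follows. The block length b and the example vector are illustrative, and this shows only the projection, not the full alternating-minimization algorithm:

```python
import numpy as np

def block_hard_threshold(x, k, b):
    """Project x onto k-block-sparse vectors (blocks of length b):
    keep the k blocks with the largest l2 energy, zero the rest."""
    blocks = x.reshape(-1, b)
    energy = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energy)[-k:]           # top-k blocks by energy
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

x = np.array([0.1, 0.2, 3.0, 4.0, 0.0, 0.1, 2.0, 2.0])
print(block_hard_threshold(x, k=2, b=2))     # → [0. 0. 3. 4. 0. 0. 2. 2.]
```

Enforcing sparsity at the block level rather than per coordinate is what drives the sample complexity down to O(ks log n) for signals with k active blocks.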
1 code implementation • 18 May 2017 • Gauri Jagatap, Chinmay Hegde
For this problem, we design a recovery algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log n)$.