1 code implementation • 2 Nov 2023 • Bhagyashree Puranik, Ahmad Beirami, Yao Qin, Upamanyu Madhow
State-of-the-art techniques for enhancing robustness of deep networks mostly rely on empirical risk minimization with suitable data augmentation.
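The robustness recipe named above — empirical risk minimization over adversarially augmented data — can be illustrated with a minimal sketch. This is not the paper's method, just a generic FGSM-style augmentation loop for a toy logistic-regression classifier; all data and parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of empirical risk minimization with adversarial data
# augmentation for a logistic-regression classifier. Toy data; not the
# paper's actual training setup.
rng = np.random.default_rng(0)
n, d, eps, lr = 200, 20, 0.1, 0.5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)          # labels in {0, 1}

w = np.zeros(d)
for _ in range(300):
    # FGSM-style augmentation: shift each input by eps in the direction
    # that increases its loss (for logistic loss, the input gradient is
    # (p - y) * w per sample, so its sign is sign(p - y) * sign(w)).
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Gradient step on the adversarially augmented batch.
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w)))
    w -= lr * X_adv.T @ (p_adv - y) / n

acc = np.mean((X @ w > 0) == (y > 0.5))     # clean training accuracy
```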
1 code implementation • 26 Feb 2022 • Metehan Cekic, Can Bakiskan, Upamanyu Madhow
While end-to-end training of Deep Neural Networks (DNNs) yields state-of-the-art performance in an increasing array of applications, it does not provide insight into, or control over, the features being extracted.
no code implementations • 7 Feb 2022 • Metehan Cekic, Ruirui Li, Zeya Chen, Yuguang Yang, Andreas Stolcke, Upamanyu Madhow
Speaker recognition, recognizing speaker identities based on voice alone, enables important downstream applications, such as personalization and authentication.
no code implementations • 4 Dec 2021 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
We derive the worst-case attack for the GLRT defense, and show that its asymptotic performance (as the dimension of the data increases) approaches that of the minimax defense.
no code implementations • 2 Aug 2021 • Ahmet Dundar Sezer, Upamanyu Madhow
Line-of-sight (LoS) multi-input multi-output (MIMO) systems exhibit attractive scaling properties as carrier frequency increases: for a fixed form factor and range, the spatial degrees of freedom increase quadratically for 2D arrays, in addition to the typically linear increase in available bandwidth.
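The quadratic scaling claim can be sketched numerically. The back-of-the-envelope estimate below — roughly $L^2/(\lambda R)$ spatial degrees of freedom per array dimension for square apertures of side $L$ at range $R$, hence $(L^2/(\lambda R))^2$ for a 2D array — is a standard heuristic, not a formula taken from the paper; the aperture, range, and frequencies are illustrative.

```python
# Hedged sketch of the LoS MIMO scaling argument: with lambda = c / f,
# the 2D spatial DoF estimate (L^2 / (lambda * R))^2 grows quadratically
# in carrier frequency f for fixed aperture L and range R.
c = 3e8              # speed of light, m/s
L, R = 0.3, 100.0    # 30 cm aperture, 100 m range (illustrative)

def dof_2d(f_hz):
    lam = c / f_hz
    return (L * L / (lam * R)) ** 2

# Doubling the carrier frequency quadruples the spatial DoF:
ratio = dof_2d(280e9) / dof_2d(140e9)
```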
1 code implementation • 12 Apr 2021 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow
Deep Neural Networks are known to be vulnerable to small, adversarially crafted perturbations.
1 code implementation • 21 Nov 2020 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow
Our nominal design is to train the decoder and classifier together in standard supervised fashion, but we also consider unsupervised decoder training based on a regression objective (as in a conventional autoencoder) with separate supervised training of the classifier.
no code implementations • 16 Nov 2020 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
We evaluate the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known.
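For the setting just described, a GLRT detector admits a compact closed form: under each hypothesis, maximizing the Gaussian likelihood jointly over an $\ell_{\infty}$-bounded perturbation shrinks every coordinate residual by the budget $\varepsilon$. The sketch below is an illustration of that structure under these assumptions, not the paper's exact evaluation; all parameter values are toy.

```python
import numpy as np

# GLRT sketch for binary hypothesis testing in white Gaussian noise under
# an l_inf-bounded perturbation: under hypothesis s in {+mu, -mu}, the
# best-case fit over ||delta||_inf <= eps has per-coordinate residual
# max(|y_i - s_i| - eps, 0). Decide for the smaller residual energy.
def glrt_decide(y, mu, eps):
    r_plus = np.maximum(np.abs(y - mu) - eps, 0.0)   # best fit under +mu
    r_minus = np.maximum(np.abs(y + mu) - eps, 0.0)  # best fit under -mu
    return 1 if np.sum(r_plus**2) <= np.sum(r_minus**2) else -1

rng = np.random.default_rng(1)
d, eps = 50, 0.2
mu = 0.5 * np.ones(d)
# Observation from hypothesis +mu, attacked by a perturbation that pushes
# every coordinate toward -mu (illustrative, not the derived worst case).
y = mu + rng.normal(scale=0.3, size=d) - eps * np.sign(mu)
decision = glrt_decide(y, mu, eps)
```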
no code implementations • 12 Jul 2020 • Anant Gupta, Ahmet Dundar Sezer, Upamanyu Madhow
We investigate the problem of localizing multiple targets using a single set of measurements from a network of radar sensors.
1 code implementation • 25 Feb 2020 • Metehan Cekic, Soorya Gopalakrishnan, Upamanyu Madhow
The opportunity for such fingerprinting arises due to subtle nonlinear variations across transmitters, even those made by the same manufacturer.
1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."
no code implementations • 25 Dec 2019 • Mohammed Abdelghany, Ali A. Farid, Upamanyu Madhow, Mark J. W. Rodwell
Millimeter wave MIMO combines the benefits of compact antenna arrays with a large number of elements and massive bandwidths, so that fully digital beamforming has the potential of supporting a large number of simultaneous users with per-user data rates of multiple gigabits/sec (Gbps).
no code implementations • 19 May 2019 • Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow
A "wireless fingerprint" which exploits hardware imperfections unique to each device is a potentially powerful tool for wireless security.
1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
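For intuition on the locally linear viewpoint behind such attacks: on an actual linear classifier the local linear model is exact, and the FGSM perturbation is provably worst-case, lowering the margin by exactly $\varepsilon\,\|w\|_1$. The sketch below shows this on toy numbers; it is an illustration of the principle, not the paper's attack.

```python
import numpy as np

# FGSM-style l_inf attack on a linear classifier w.x with label y in
# {-1, +1}: the worst-case budget-eps perturbation is -eps * y * sign(w),
# and it reduces the margin y*(w.x) by exactly eps * ||w||_1. Toy values.
w = np.array([1.0, -2.0, 0.5, 0.0])
x = np.array([0.6, -0.3, 0.2, 1.0])
y = 1.0
eps = 0.3

margin_clean = y * (w @ x)            # = 0.6 + 0.6 + 0.1 = 1.3
delta = -eps * y * np.sign(w)         # worst-case l_inf perturbation
margin_adv = y * (w @ (x + delta))    # margin drops by eps * ||w||_1
```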
3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani
It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).
no code implementations • 9 Mar 2018 • Zhinus Marzi, Joao Hespanha, Upamanyu Madhow
There is growing evidence regarding the importance of spike timing in neural information processing, with even a small number of spikes carrying information, but computational models lag significantly behind those for rate coding.
3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani
In this paper, we study adversarial vulnerability in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
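One way sparsity can help, sketched below under simplifying assumptions (identity sparsifying basis, exactly $K$-sparse data, toy values — not the paper's construction): a front end that keeps only the $K$ largest-magnitude coefficients discards most of a dense $\ell_{\infty}$ perturbation, shrinking the attacker's effect from order $\varepsilon d$ to order $\varepsilon K$.

```python
import numpy as np

# Sparsifying front end vs. a dense l_inf attack on a linear classifier.
def sparsify(x, k):
    """Keep the k largest-magnitude coefficients of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

d, k, eps = 100, 5, 0.1
x = np.zeros(d)
x[:k] = 2.0                   # K-sparse clean signal
w = np.ones(d)                # linear classifier weights
delta = -eps * np.sign(w)     # worst-case l_inf attack, dense across coords

drop_no_defense = w @ x - w @ (x + delta)                 # eps * d = 10.0
drop_with_front_end = w @ sparsify(x, k) - w @ sparsify(x + delta, k)
```

With the front end, only the $k$ surviving coefficients carry any perturbation, so the score drop falls from 10.0 to 0.5 in this toy example.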
no code implementations • 14 Nov 2016 • Aseem Wadhwa, Upamanyu Madhow
The "fire together, wire together" Hebbian model is a central principle for learning in neuroscience, but surprisingly, it has found limited applicability in modern machine learning.
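For readers unfamiliar with the Hebbian model: the plain update $\Delta w = \eta\,y\,x$ with $y = w^\top x$ grows without bound, and Oja's normalized variant is the standard stabilized form, converging to the top principal direction of the input. The sketch below illustrates that classical fact; it is not the learning rule developed in the paper, and all data is toy.

```python
import numpy as np

# Oja's rule, a normalized Hebbian update: dw = eta * y * (x - y * w),
# with y = w.x. On zero-mean inputs it converges to the top principal
# direction of the input covariance (here e1, with variance 3 > 1 > 0.5).
rng = np.random.default_rng(3)
C = np.diag([3.0, 1.0, 0.5])                 # input covariance
X = rng.normal(size=(5000, 3)) @ np.sqrt(C)  # samples with covariance C
w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)               # Hebbian term minus decay

alignment = abs(w[0]) / np.linalg.norm(w)    # closeness to top PC (e1)
```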
1 code implementation • NeurIPS 2015 • Dinesh Ramasamy, Upamanyu Madhow
Spectral embedding based on the Singular Value Decomposition (SVD) is a widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors and rescaling the coordinate axes (by a predefined function of the singular value).
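The preprocessing step described in that sentence can be written in a few lines. The function below is a generic sketch, not the paper's algorithm: it projects each row onto the top-$k$ singular directions and rescales each axis by a chosen function of the corresponding singular value (identity by default; other common choices include the square root).

```python
import numpy as np

# SVD-based spectral embedding: rows of X are mapped to their top-k left
# singular coordinates, with each axis rescaled by scale(singular value).
def spectral_embed(X, k, scale=lambda s: s):
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # thin SVD
    return U[:, :k] * scale(S[:k])

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 10))
Z = spectral_embed(X, k=3)      # 40 points embedded in 3 dimensions
```

With the identity scaling, the embedding coordinates are mutually orthogonal across dimensions, since they are scaled columns of the orthonormal matrix $U$.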