no code implementations • ICLR 2019 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
Neural network models are known to be vulnerable to geometric transformations as well as to small pixel-wise perturbations of the input.
1 code implementation • 16 Dec 2023 • Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N Balasubramanian
We observe two main causes of fragile attributions: first, existing metrics of robustness (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, making random perturbations appear to be a strong attack; second, the attribution can be concentrated in a small region even when there are multiple important parts in an image.
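A minimal NumPy sketch of the top-k intersection metric the paper critiques, assuming attributions are given as equal-sized arrays (the map size, k, and the 2-pixel shift are illustrative):

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k=1000):
    """Fraction of the k highest-attribution pixels shared by two maps.

    attr_a, attr_b: attribution maps of identical shape (e.g., H x W).
    Returns a value in [0, 1]; 1.0 means the top-k pixel sets coincide.
    """
    top_a = set(np.argsort(attr_a.ravel())[-k:])
    top_b = set(np.argsort(attr_b.ravel())[-k:])
    return len(top_a & top_b) / k

# Even a small spatial shift of the *same* attribution map can sharply
# reduce top-k intersection -- the over-penalization described above.
rng = np.random.default_rng(0)
attr = rng.random((224, 224))
shifted = np.roll(attr, shift=2, axis=1)  # 2-pixel horizontal shift
print(topk_intersection(attr, shifted, k=1000))
```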
no code implementations • 9 Nov 2022 • Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, Vineeth N Balasubramanian
While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models.
no code implementations • 20 Jun 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
We observe that networks trained with a constant ratio of learning rate to batch size, as proposed by Jastrzebski et al., generalize well and also have nearly constant adversarial robustness, independent of the batch size.
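A minimal PyTorch sketch of keeping the learning-rate-to-batch-size ratio fixed while scaling the batch size; the base values and stand-in model are illustrative, not the paper's exact configuration:

```python
import torch

BASE_LR, BASE_BATCH = 0.1, 128  # illustrative base configuration

def scaled_lr(batch_size, base_lr=BASE_LR, base_batch=BASE_BATCH):
    """Learning rate that preserves the lr / batch-size ratio."""
    return base_lr * batch_size / base_batch

model = torch.nn.Linear(10, 2)  # stand-in model
for bs in (128, 256, 512):
    opt = torch.optim.SGD(model.parameters(), lr=scaled_lr(bs))
    # The ratio lr/bs stays fixed at 0.1/128 for every batch size.
    print(bs, opt.param_groups[0]["lr"])
```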
no code implementations • 8 Jun 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks.
1 code implementation • 18 May 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images achieves fooling rates comparable to state-of-the-art universal attacks [Dezfooli17, Khrulkov18] for reasonable perturbation norms.
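A sketch of one plausible reading of this universalization step, under the assumption that per-sample FGSM directions from the small test batch are stacked as rows and the top right singular vector is used as the shared direction; the function name, epsilon, and normalization are illustrative:

```python
import torch
import torch.nn.functional as F

def universalize_fgsm(model, images, labels, eps=0.04):
    """Build one perturbation from per-sample FGSM directions (a sketch).

    Stacks per-image sign-gradients as rows of a matrix and takes its
    top right singular vector as the universal direction, scaled to an
    L-infinity-style budget eps.
    """
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    dirs = images.grad.sign().flatten(1)           # (n, d) per-sample attacks
    _, _, vh = torch.linalg.svd(dirs, full_matrices=False)
    v = vh[0]                                      # top right singular vector
    v = v / v.abs().max()                          # normalize to unit L-inf
    return (eps * v).view(images.shape[1:])        # one perturbation for all

# Usage (shapes only): delta = universalize_fgsm(model, x64, y64), then
# evaluate model(x + delta) on held-out test images.
```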
1 code implementation • NeurIPS 2021 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
The (non-)robustness of neural networks to small adversarial pixel-wise perturbations, and, as shown more recently, even to random spatial transformations (e.g., translations, rotations), calls for both theoretical and empirical understanding.
no code implementations • 25 Sep 2019 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves with training augmentation by progressively larger rotations, but their adversarial robustness does not improve alongside it; worse, it can even drop significantly on datasets such as MNIST.
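A minimal torchvision sketch of the augmentation regime described above, with rotations drawn uniformly from progressively larger ranges (the angle schedule is illustrative, not the paper's exact one):

```python
import torchvision.transforms as T

# Train successive models with rotations drawn from [-theta, theta] for
# progressively larger theta; the observation above is that rotation
# invariance improves while adversarial robustness may drop.
for theta in (0, 30, 60, 90, 180):  # illustrative schedule
    train_tf = T.Compose([
        T.RandomRotation(degrees=theta),  # uniform angle in [-theta, theta]
        T.ToTensor(),
    ])
    # ... train a StdCNN / GCNN with train_tf, then measure both its
    # rotation invariance and its adversarial robustness.
```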
no code implementations • 25 Sep 2019 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool, and SVD-FGSM, on state-of-the-art neural networks.
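Fooling rate here is the standard notion: the fraction of inputs whose predicted label changes when the fixed universal perturbation is added. A minimal PyTorch sketch:

```python
import torch

@torch.no_grad()
def fooling_rate(model, images, delta):
    """Fraction of inputs whose predicted class flips under a fixed
    universal perturbation delta (broadcast over the batch)."""
    clean = model(images).argmax(dim=1)
    pert = model(images + delta).argmax(dim=1)
    return (clean != pert).float().mean().item()

# Usage: fr = fooling_rate(model, test_images, delta)
```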
no code implementations • 28 May 2019 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
An effective method to obtain an adversarially robust network is to train it with adversarially perturbed samples.
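A minimal sketch of this idea with single-step FGSM perturbations in PyTorch; the loss, step size, and loop structure are illustrative, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Single-step FGSM: move x in the sign of the loss gradient."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, opt, x, y, eps=0.03):
    """One parameter update on adversarially perturbed samples."""
    x_adv = fgsm_perturb(model, x, y, eps)
    opt.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```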
no code implementations • 17 May 2019 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
Large-batch training is known to degrade generalization (Jastrzebski et al., 2017) as well as adversarial robustness (Yao et al., 2018b).
no code implementations • 27 Sep 2018 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
In this paper, we observe an interesting spectral property shared by all of the above input-dependent, pixel-wise adversarial attacks on translation- and rotation-equivariant networks.
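One way to inspect such a spectral property: stack per-sample attack directions as rows of a matrix and examine the decay of its singular values; a sharp decay means the attacks are nearly low-rank, i.e., they share a few dominant directions. A hedged NumPy sketch with synthetic data standing in for real attack directions:

```python
import numpy as np

def attack_spectrum(perturbations):
    """Normalized singular values of a matrix whose rows are per-sample,
    pixel-wise attack directions (n_samples x n_pixels). A sharply
    decaying spectrum indicates shared low-dimensional structure
    across input-dependent attacks."""
    mat = perturbations.reshape(len(perturbations), -1)
    s = np.linalg.svd(mat, compute_uv=False)
    return s / s[0]

# Synthetic example: rows share one dominant direction plus noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(784)
perts = (np.outer(rng.standard_normal(100), shared)
         + 0.1 * rng.standard_normal((100, 784)))
print(attack_spectrum(perts)[:5])  # first value 1.0, the rest small
```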