Search Results for author: K V Subrahmanyam

Found 10 papers, 2 papers with code

Robustness and Equivariance of Neural Networks

no code implementations ICLR 2019 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

Neural network models are known to be vulnerable to geometric transformations as well as small pixel-wise perturbations of the input.

Translation

How do SGD hyperparameters in natural training affect adversarial robustness?

no code implementations20 Jun 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

We observe that networks trained with a constant learning-rate-to-batch-size ratio, as proposed by Jastrzebski et al., generalize well and also have almost constant adversarial robustness, independent of the batch size. A minimal sketch of this ratio scaling follows below.

Adversarial Robustness
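The constant learning-rate-to-batch-size ratio mentioned above is straightforward to reproduce. The PyTorch sketch below keeps lr / batch_size fixed while the batch size varies; the base learning rate, base batch size, momentum value, and placeholder model are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (assumed base values, not the paper's hyperparameters):
# keep the ratio lr / batch_size fixed while varying the batch size,
# as suggested by Jastrzebski et al.
import torch
import torch.nn as nn

BASE_LR, BASE_BATCH = 0.1, 128          # assumed reference pair
RATIO = BASE_LR / BASE_BATCH            # this ratio is held constant

def make_optimizer(model: nn.Module, batch_size: int) -> torch.optim.SGD:
    """Scale the SGD learning rate linearly with the batch size."""
    lr = RATIO * batch_size
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

model = nn.Linear(784, 10)              # placeholder model
for bs in (64, 128, 256, 512):
    opt = make_optimizer(model, bs)
    print(bs, opt.param_groups[0]["lr"])
```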

On Universalized Adversarial and Invariant Perturbations

no code implementations8 Jun 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks.

Translation

Universalization of any adversarial attack using very few test examples

1 code implementation18 May 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks [Dezfooli17, Khrulkov18] for reasonable norms of perturbation. A minimal sketch of the SVD-based universalization follows below.

Adversarial Attack
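The universalization step can be sketched as follows: compute per-example attack directions (FGSM here) on a small test sample, stack them into a matrix, and take the top right singular vector as a single universal perturbation. The model, data, epsilon, and the max-norm scaling at the end are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of SVD-based universalization of a per-example attack
# (FGSM directions here); model, data and epsilon are placeholders.
import torch
import torch.nn.functional as F

def fgsm_direction(model, x, y):
    """Sign of the input gradient of the loss (the FGSM direction)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return x.grad.sign()

def universal_from_svd(model, images, labels, eps=0.05):
    """Stack a few per-example attack directions and take the top
    right singular vector as a single universal perturbation."""
    dirs = torch.stack([
        fgsm_direction(model, xi.unsqueeze(0), yi.unsqueeze(0)).flatten()
        for xi, yi in zip(images, labels)
    ])                                   # shape: (num_samples, num_pixels)
    _, _, vt = torch.linalg.svd(dirs, full_matrices=False)
    v = vt[0]                            # top right singular vector
    # Scaling to an eps-sized infinity-norm ball is an illustrative choice.
    return eps * v.view_as(images[0]) / v.abs().max()
```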

Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks

1 code implementation NeurIPS 2021 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and as more recently shown, to even random spatial transformations (e.g., translations, rotations) entreats both theoretical and empirical understanding.

Adversarial Robustness

Invariance vs Robustness of Neural Networks

no code implementations25 Sep 2019 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves when training is augmented with progressively larger rotations, but their adversarial robustness does not improve; worse, it can even drop significantly on datasets such as MNIST. A short sketch of how these two quantities can be measured follows below.

Adversarial Robustness, Image Classification
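The two quantities compared above can be measured roughly as below: prediction consistency under random rotations as a proxy for rotation invariance, and accuracy under an FGSM perturbation as a proxy for adversarial robustness. The rotation range, epsilon, and evaluation protocol are assumptions, not the paper's setup.

```python
# Minimal sketch of the two quantities: consistency under random rotation
# (invariance) and accuracy under FGSM (adversarial robustness).
# Model, data, rotation range and epsilon are placeholders.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def rotation_consistency(model, x, max_deg=30.0):
    """Fraction of inputs whose prediction is unchanged by a random rotation."""
    angles = (torch.rand(x.size(0)) * 2 - 1) * max_deg
    x_rot = torch.stack([TF.rotate(xi, float(a)) for xi, a in zip(x, angles)])
    return (model(x).argmax(1) == model(x_rot).argmax(1)).float().mean().item()

def fgsm_accuracy(model, x, y, eps=0.1):
    """Accuracy on FGSM-perturbed inputs at the given epsilon."""
    x_adv = x.clone().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    return (model(x_adv).argmax(1) == y).float().mean().item()
```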

Universal Adversarial Attack Using Very Few Test Examples

no code implementations25 Sep 2019 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state-of-the-art neural networks; a minimal sketch of the fooling-rate metric follows below.

Adversarial Attack
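The fooling rate used in this kind of evaluation is simply the fraction of inputs whose predicted label changes when the fixed universal perturbation is added. A minimal sketch, assuming a PyTorch model and inputs in [0, 1], is below.

```python
# Minimal sketch of the fooling-rate metric for a universal perturbation:
# the fraction of inputs whose predicted label flips when the (fixed)
# perturbation v is added. Model, data and v are placeholders.
import torch

@torch.no_grad()
def fooling_rate(model, x, v):
    """Fraction of inputs whose prediction changes under perturbation v."""
    clean = model(x).argmax(1)
    fooled = model((x + v).clamp(0, 1)).argmax(1)
    return (clean != fooled).float().mean().item()
```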

Better Generalization with Adaptive Adversarial Training

no code implementations28 May 2019 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

An effective method to obtain an adversarially robust network is to train it with adversarially perturbed samples. A minimal adversarial training sketch follows below.

Adversarial Robustness
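A minimal, non-adaptive version of this idea is sketched below: each batch is first perturbed with a fixed-epsilon FGSM attack and the network is then trained on the perturbed batch. The adaptive schedule studied in the paper is not reproduced here; the model, data, and epsilon are placeholders.

```python
# Minimal sketch of training on adversarially perturbed samples (fixed-
# epsilon FGSM); the paper's adaptive variant is not reproduced, and the
# model/data below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.1):
    """Perturb a batch with a single FGSM step of size eps."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, opt, x, y, eps=0.1):
    """One step of adversarial training: perturb, then fit the perturbed batch."""
    x_adv = fgsm_perturb(model, x, y, eps)
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))        # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,))  # dummy batch
print(adversarial_training_step(model, opt, x, y))
```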

On Adversarial Robustness of Small vs Large Batch Training

no code implementations17 May 2019 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

Large-batch training is known to incur poor generalization (Jastrzebski et al., 2017) as well as poor adversarial robustness (Yao et al., 2018b).

Adversarial Robustness

Universal Attacks on Equivariant Networks

no code implementations27 Sep 2018 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

In this paper, we observe an interesting spectral property shared by all of the above input-dependent, pixel-wise adversarial attacks on translation and rotation-equivariant networks.

Adversarial Attack, Translation
