Search Results for author: Harshitha Machiraju

Found 8 papers, 6 papers with code

Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions

1 code implementation • 12 Jun 2023 • Harshitha Machiraju, Michael H. Herzog, Pascal Frossard

In response, researchers have developed image corruption datasets to evaluate the performance of deep neural networks in handling such corruptions.

Classification • Robust classification

CLAD: A Contrastive Learning based Approach for Background Debiasing

1 code implementation • 6 Oct 2022 • Ke Wang, Harshitha Machiraju, Oh-Hyeon Choung, Michael Herzog, Pascal Frossard

Convolutional neural networks (CNNs) have achieved superhuman performance in multiple vision tasks, especially image classification.

Contrastive Learning • Image Classification

A comment on Guo et al. [arXiv:2206.11228]

no code implementations • 2 Aug 2022 • Ben Lonnqvist, Harshitha Machiraju, Michael H. Herzog

In a recent article, Guo et al. [arXiv:2206.11228] report that adversarially trained neural representations in deep networks may already be as robust as corresponding primate IT neural representations.

Empirical Advocacy of Bio-inspired Models for Robust Image Recognition

1 code implementation • 18 May 2022 • Harshitha Machiraju, Oh-Hyeon Choung, Michael H. Herzog, Pascal Frossard

There are continuous attempts to use features of the human visual system to improve the robustness of neural networks to data perturbations.

Data Augmentation

Bio-inspired Robustness: A Review

no code implementations • 16 Mar 2021 • Harshitha Machiraju, Oh-Hyeon Choung, Pascal Frossard, Michael H. Herzog

Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks.

A Little Fog for a Large Turn

2 code implementations • 16 Jan 2020 • Harshitha Machiraju, Vineeth N. Balasubramanian

Small, carefully crafted perturbations called adversarial perturbations can easily fool neural networks.
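As a generic illustration of the idea in that sentence (not the method of this paper, which deals with fog-like perturbations): the classic fast gradient sign method (FGSM) perturbs an input by a small step in the sign of the loss gradient, which can flip a classifier's prediction. A minimal sketch on a toy logistic classifier, with all weights and values invented for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" linear classifier: predict class 1 if w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.2, 0.1])  # clean input; w.x = 0.3 > 0, so classified as 1
y = 1.0                   # true label

# Gradient of the binary cross-entropy loss with respect to the input:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.4                          # L-infinity perturbation budget
x_adv = x + eps * np.sign(grad_x)  # FGSM step: small but adversarial

print(sigmoid(w @ x + b) > 0.5)      # clean prediction: True (class 1)
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial prediction: False (flipped)
```

Each coordinate of the input moves by at most `eps`, yet the prediction flips, which is the sense in which such perturbations are "small, carefully crafted".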

Adversarial Attack • Autonomous Navigation +1

Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models

1 code implementation • 13 May 2019 • Mayank Singh, Abhishek Sinha, Nupur Kumari, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N. Balasubramanian

We analyze the adversarially trained robust models to study their vulnerability against adversarial attacks at the level of the latent layers.

Adversarial Attack
