Search Results for author: Shashank Kotyan

Found 16 papers, 5 papers with code

k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis

1 code implementation · 7 Dec 2023 · Shashank Kotyan, Ueda Tatsuya, Danilo Vasconcellos Vargas

While these methods effectively capture the overall sample distribution in the entire learned latent space, they tend to distort the structure of sample distributions within specific classes in the subset of the latent space.

Dimensionality Reduction
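The distortion the snippet describes can be made concrete with a toy local neighborhood analysis: for each latent vector, check what fraction of its nearest neighbors share its class label. This is an illustrative sketch only, not the paper's k* statistic; the `neighborhood_purity` helper and the toy data are assumptions.

```python
import numpy as np

def neighborhood_purity(latents, labels, k=3):
    """For each sample, find its k nearest neighbors in the latent
    space and report the fraction that share its class label."""
    n = len(latents)
    # Pairwise Euclidean distances between latent vectors.
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point from its own neighborhood
    purity = np.empty(n)
    for i in range(n):
        nn = np.argsort(d[i])[:k]
        purity[i] = np.mean(labels[nn] == labels[i])
    return purity

# Two well-separated clusters: every neighborhood is pure (all values 1.0).
latents = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                    [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(neighborhood_purity(latents, labels, k=2))
```

A dimensionality-reduction method that preserves this purity globally but scrambles it within one class would exhibit exactly the within-class distortion the abstract points at.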

Synthetic Shifts to Initial Seed Vector Exposes the Brittle Nature of Latent-Based Diffusion Models

no code implementations · 24 Nov 2023 · Mao Po-Yuan, Shashank Kotyan, Tham Yik Foong, Danilo Vasconcellos Vargas

To understand the impact of the initial seed vector on generated samples, we propose a reliability evaluation framework that evaluates the generated samples of a diffusion model when the initial seed vector is subjected to various synthetic shifts.

Image Generation
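As an illustration of what synthetic shifts to an initial seed vector might look like, the sketch below perturbs a toy seed in a few simple ways (Gaussian noise, scaling, a constant offset). The shift types and the `shifted_seeds` helper are assumptions for illustration, not the paper's evaluation framework.

```python
import numpy as np

def shifted_seeds(seed, rng):
    """Produce a few hypothetical synthetic shifts of an initial seed
    vector, as a stand-in for a reliability-evaluation sweep."""
    return {
        "original": seed,
        "gaussian": seed + 0.1 * rng.standard_normal(seed.shape),
        "scaled": 1.5 * seed,
        "offset": seed + 0.5,
    }

rng = np.random.default_rng(0)
seed = rng.standard_normal(4)  # toy seed; real diffusion latents are much larger
variants = shifted_seeds(seed, rng)
for name, v in variants.items():
    print(name, np.round(v, 3))
```

Feeding each variant through the same diffusion model and comparing the outputs is the kind of sweep that exposes sensitivity to the seed.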

Towards Improving Robustness Against Common Corruptions using Mixture of Class Specific Experts

no code implementations · 16 Nov 2023 · Shashank Kotyan, Danilo Vasconcellos Vargas

Through this contribution, the paper aims to foster a deeper understanding of neural network limitations and proposes a practical approach to enhance their resilience in the face of evolving and unpredictable conditions.

Data Augmentation

Towards Improving Robustness Against Common Corruptions in Object Detectors Using Adversarial Contrastive Learning

no code implementations · 14 Nov 2023 · Shashank Kotyan, Danilo Vasconcellos Vargas

Neural networks have revolutionized various domains, exhibiting remarkable accuracy in tasks like natural language processing and computer vision.

Autonomous Driving · Contrastive Learning

Improving Robustness for Vision Transformer with a Simple Dynamic Scanning Augmentation

no code implementations · 1 Nov 2023 · Shashank Kotyan, Danilo Vasconcellos Vargas

In conclusion, this work contributes to the ongoing research on Vision Transformers by introducing Dynamic Scanning Augmentation as a technique for improving the accuracy and robustness of ViT.

A reading survey on adversarial machine learning: Adversarial attacks and their understanding

no code implementations · 7 Aug 2023 · Shashank Kotyan

A particular branch of research, Adversarial Machine Learning, exploits and seeks to understand the vulnerabilities that cause neural networks to misclassify near-original inputs.

Deep neural network loses attention to adversarial images

no code implementations · 10 Jun 2021 · Shashank Kotyan, Danilo Vasconcellos Vargas

We also analyse how different adversarial samples distort the attention of the neural network relative to the original samples.

Image Classification

Representation Quality Explain Adversarial Attacks

no code implementations · 25 Sep 2019 · Danilo Vasconcellos Vargas, Shashank Kotyan, Moe Matsuki

The main idea lies in the fact that some features are present on unknown classes, and that unknown classes can be defined as a combination of previously learned features without representation bias (a bias towards a representation that maps only the current set of input-outputs and their boundary).

Evolving Robust Neural Architectures to Defend from Adversarial Attacks

1 code implementation · 27 Jun 2019 · Shashank Kotyan, Danilo Vasconcellos Vargas

By creating a novel neural architecture search with options for dense layers to connect with convolution layers and vice versa, as well as the addition of concatenation layers in the search, we were able to evolve an architecture that is inherently accurate on adversarial samples.

Neural Architecture Search

Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences

1 code implementation · 15 Jun 2019 · Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki

A crucial step to understanding the rationale for this lack of robustness is to assess the potential of the neural networks' representation to encode the existing features.

Clustering · Zero-Shot Learning

Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary

1 code implementation · 14 Jun 2019 · Shashank Kotyan, Danilo Vasconcellos Vargas

There exists a vast number of adversarial attacks and defences for machine learning algorithms of various types which makes assessing the robustness of algorithms a daunting task.

Adversarial Robustness · Image Classification
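The complementary roles of the two threat models in the title can be seen by measuring both norms on toy perturbations: a one-pixel change is small in L0 but large in L-infinity, while uniform low-level noise is the opposite, so assessing with a single norm can miss an entire attack class. A minimal sketch; the function name and toy images are illustrative assumptions.

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """L0 counts how many pixels changed; L-infinity measures the
    largest single change."""
    delta = x_adv - x
    l0 = int(np.count_nonzero(delta))
    linf = float(np.max(np.abs(delta)))
    return l0, linf

x = np.zeros((4, 4))
one_pixel = x.copy(); one_pixel[2, 1] = 0.9  # few pixels, big change
noise = x + 0.02                             # every pixel, tiny change
print(perturbation_norms(x, one_pixel))  # → (1, 0.9)
print(perturbation_norms(x, noise))      # → (16, 0.02)
```

A defence tuned only to bound L-infinity would treat the one-pixel example as benign, and vice versa, which is the asymmetry the paper's title highlights.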

Self Training Autonomous Driving Agent

no code implementations · 26 Apr 2019 · Shashank Kotyan, Danilo Vasconcellos Vargas, Venkanna U

Intrinsically, driving is a Markov Decision Process, which suits the reinforcement learning paradigm well.

Autonomous Driving · reinforcement-learning +1
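The MDP framing can be sketched with a toy lane-keeping environment solved by tabular Q-learning. This is an illustrative stand-in, not the paper's self-training agent; the state space, action space, and reward are all assumptions chosen for brevity.

```python
import random

# Toy lane-keeping MDP: states are lateral offsets -2..2, actions steer
# left/straight/right, and reward is highest when centered at offset 0.
STATES = [-2, -1, 0, 1, 2]
ACTIONS = [-1, 0, 1]

def step(s, a):
    s2 = max(-2, min(2, s + a))  # clip to the road edges
    return s2, -abs(s2)          # penalty grows with distance from center

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(20):
            # Epsilon-greedy action selection.
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r = step(s, a)
            # Standard Q-learning temporal-difference update.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                                  - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
# The learned greedy policy steers back toward the center from either edge.
print(max(ACTIONS, key=lambda a: Q[(2, a)]))   # from the right edge
print(max(ACTIONS, key=lambda a: Q[(-2, a)]))  # from the left edge
```

The states, actions, transitions, and rewards above are exactly the four ingredients of an MDP, which is why the reinforcement learning paradigm maps onto driving so directly.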

Drishtikon: An advanced navigational aid system for visually impaired people

no code implementations · 23 Apr 2019 · Shashank Kotyan, Nishant Kumar, Pankaj Kumar Sahu, Venkanna Udutalapally

In this paper, we propose an aid system, built on object detection and depth perception, that helps a person navigate without colliding with objects.

Navigate · Object +2
