Search Results for author: Vineeth N Balasubramanian

Found 43 papers, 22 papers with code

Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks

no code implementations • 9 Jan 2024 • Tanmay Garg, Deepika Vemuri, Vineeth N Balasubramanian

This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.

Explainable Models

Rethinking Robustness of Model Attributions

1 code implementation • 16 Dec 2023 • Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N Balasubramanian

We observe two main causes of fragile attributions: first, existing robustness metrics (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, making random perturbations appear to be a strong attack; and second, the attribution can be concentrated in a small region even when there are multiple important parts in an image.
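The top-k intersection metric criticized above can be sketched in a few lines. This is a hypothetical illustration (the function name and toy attribution maps are ours, not the paper's code), showing how a one-pixel shift in a concentrated attribution drives the metric to zero:

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k):
    """Fraction of the k most-attributed pixels shared by two attribution maps."""
    top_a = set(np.argsort(attr_a.ravel())[-k:])
    top_b = set(np.argsort(attr_b.ravel())[-k:])
    return len(top_a & top_b) / k

# A concentrated attribution shifted by a single pixel has zero overlap,
# so even a tiny local shift looks like a maximally successful attack.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[0, 1] = 1.0
print(topk_intersection(a, a, 1))  # 1.0
print(topk_intersection(a, b, 1))  # 0.0
```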

Causal Inference Using LLM-Guided Discovery

no code implementations • 23 Oct 2023 • Aniket Vashishtha, Abbavaram Gowtham Reddy, Abhinav Kumar, Saketh Bachu, Vineeth N Balasubramanian, Amit Sharma

At the core of causal inference lies the challenge of determining reliable causal graphs solely based on observational data.

Causal Discovery • Causal Inference

Explaining Deep Face Algorithms through Visualization: A Survey

no code implementations • 26 Sep 2023 • Thrupthi Ann John, Vineeth N Balasubramanian, C. V. Jawahar

Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work.

On Counterfactual Data Augmentation Under Confounding

no code implementations • 29 May 2023 • Abbavaram Gowtham Reddy, Saketh Bachu, Saloni Dash, Charchit Sharma, Amit Sharma, Vineeth N Balasubramanian

Counterfactual data augmentation has recently emerged as a method to mitigate confounding biases in the training data.

counterfactual • Data Augmentation

Data-Free Class-Incremental Hand Gesture Recognition

1 code implementation • ICCV 2023 • Shubhra Aich, Jesus Ruiz-Santaquiteria, Zhenyu Lu, Prachi Garg, K J Joseph, Alvaro Fernandez Garcia, Vineeth N Balasubramanian, Kenrick Kin, Chengde Wan, Necati Cihan Camgoz, Shugao Ma, Fernando de la Torre

Our sampling scheme significantly outperforms SOTA methods on two 3D skeleton gesture datasets: the publicly available SHREC 2017, and EgoGesture3D, which we extract from a publicly available RGBD dataset.

Class Incremental Learning • Hand Gesture Recognition • +3

On the Robustness of Explanations of Deep Neural Network Models: A Survey

no code implementations • 9 Nov 2022 • Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, Vineeth N Balasubramanian

While many surveys review explainability methods themselves, there has hitherto been no effort to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models.

NESTER: An Adaptive Neurosymbolic Method for Causal Effect Estimation

no code implementations • 8 Nov 2022 • Abbavaram Gowtham Reddy, Vineeth N Balasubramanian

Methods based on the potential outcomes framework solve this problem by exploiting inductive biases and heuristics from causal inference.

Causal Inference • Program Synthesis

Counterfactual Generation Under Confounding

no code implementations • 22 Oct 2022 • Abbavaram Gowtham Reddy, Saloni Dash, Amit Sharma, Vineeth N Balasubramanian

Given a causal generative process, we formally characterize the adverse effects of confounding on any downstream tasks and show that the correlation between generative factors (attributes) can be used to quantitatively measure confounding between generative factors.
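As a rough illustration of the quoted idea that correlation between generative factors can quantify confounding, consider a toy setup in which a latent confounder drives two hypothetical attributes (all names and values here are illustrative, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=10_000)                # unobserved confounder
attr1 = c + 0.1 * rng.normal(size=10_000)  # hypothetical attribute 1
attr2 = c + 0.1 * rng.normal(size=10_000)  # hypothetical attribute 2

# The shared cause induces a strong correlation between the attributes,
# which serves as a simple quantitative signal of confounding.
r = np.corrcoef(attr1, attr2)[0, 1]
print(f"correlation: {r:.2f}")  # close to 1 -> heavily confounded
```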

Attribute • counterfactual

Distilling the Undistillable: Learning from a Nasty Teacher

1 code implementation • 21 Oct 2022 • Surgan Jandial, Yash Khasbage, Arghya Pal, Vineeth N Balasubramanian, Balaji Krishnamurthy

The inadvertent stealing of private/sensitive information using Knowledge Distillation (KD) has been getting significant attention recently and has guided subsequent defense efforts considering its critical nature.

Knowledge Distillation

ARUBA: An Architecture-Agnostic Balanced Loss for Aerial Object Detection

no code implementations • 10 Oct 2022 • Rebbapragada V C Sairam, Monish Keswani, Uttaran Sinha, Nishit Shah, Vineeth N Balasubramanian

In this paper, we denote the size of an object as the number of pixels it covers in an image, and size imbalance as the over-representation of certain object sizes in a dataset.
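Under these definitions, computing per-object sizes and spotting imbalance is straightforward; a minimal sketch using hypothetical binary object masks (the masks and counts are ours, for illustration only):

```python
import numpy as np
from collections import Counter

# Hypothetical binary masks, one per annotated object: size = pixel count.
masks = [np.ones((2, 2)), np.ones((2, 2)), np.ones((8, 8))]
sizes = [int(m.sum()) for m in masks]
print(sizes)  # [4, 4, 64]

# A size histogram exposes imbalance: small objects dominate this toy set.
print(Counter(sizes))  # size 4 occurs twice, size 64 once
```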

Object • Object Detection • +1

Novel Class Discovery without Forgetting

no code implementations • 21 Jul 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories.

Novel Class Discovery

INDIGO: Intrinsic Multimodality for Domain Generalization

no code implementations • 13 Jun 2022 • Puneet Mangla, Shivam Chandhok, Milan Aggarwal, Vineeth N Balasubramanian, Balaji Krishnamurthy

To this end, we propose IntriNsic multimodality for DomaIn GeneralizatiOn (INDIGO), a simple and elegant way of leveraging the intrinsic modality present in these pre-trained multimodal networks along with the visual modality to enhance generalization to unseen domains at test-time.

Domain Generalization

Proto2Proto: Can you recognize the car, the way I do?

1 code implementation • CVPR 2022 • Monish Keswani, Sriranjani Ramakrishnan, Nishant Reddy, Vineeth N Balasubramanian

With growing use cases of model reuse and distillation, there is a need to also study transfer of interpretability from one model to another.

Knowledge Distillation

Spacing Loss for Discovering Novel Categories

1 code implementation • 22 Apr 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data, by utilizing labeled instances from a disjoint set of classes.

Novel Class Discovery

Unseen Classes at a Later Time? No Problem

1 code implementation • CVPR 2022 • Hari Chandana Kuchibhotla, Sumitra S Malagi, Shivam Chandhok, Vineeth N Balasubramanian

Secondly, we introduce a unified feature-generative framework for CGZSL that leverages bi-directional incremental alignment to dynamically adapt to addition of new classes, with or without labeled data, that arrive over time in any of these CGZSL settings.

Generalized Zero-Shot Learning

On Causally Disentangled Representations

2 code implementations • 10 Dec 2021 • Abbavaram Gowtham Reddy, Benin Godfrey L, Vineeth N Balasubramanian

Finally, we perform an empirical study on state-of-the-art disentangled representation learners, using our metrics and dataset to evaluate them from a causal perspective.

Disentanglement • Fairness

Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach

no code implementations • NeurIPS 2021 • Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian

Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps.

Adversarial Robustness

Matching Learned Causal Effects of Neural Networks with Domain Priors

no code implementations • 24 Nov 2021 • Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, Vineeth N Balasubramanian, Amit Sharma

A trained neural network can be interpreted as a structural causal model (SCM) that provides the effect of changing input variables on the model's output.
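One simple way to read such an effect off a trained model is to intervene on one input feature and compare average outputs. A toy sketch: the model, the feature names, and this particular average-effect estimator are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda X: 2.0 * X[:, 0] + 0.0 * X[:, 1]  # toy "trained" model
X = rng.normal(size=(1000, 2))               # observational inputs

def avg_effect(f, X, i, a, b):
    """Average effect on f of setting feature i to a vs. b (do-style intervention)."""
    Xa, Xb = X.copy(), X.copy()
    Xa[:, i], Xb[:, i] = a, b
    return float(np.mean(f(Xa)) - np.mean(f(Xb)))

print(avg_effect(f, X, 0, 1.0, 0.0))  # 2.0: feature 0 strongly affects the output
print(avg_effect(f, X, 1, 1.0, 0.0))  # 0.0: feature 1 has no effect
```

Matching such estimated effects against domain priors is then a constraint one can impose during training.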

Fairness

Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach

1 code implementation • 30 Oct 2021 • Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian

Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps.

Adversarial Robustness

Multi-Domain Incremental Learning for Semantic Segmentation

1 code implementation • 23 Oct 2021 • Prachi Garg, Rohit Saluja, Vineeth N Balasubramanian, Chetan Arora, Anbumani Subramanian, C. V. Jawahar

Recent efforts in multi-domain learning for semantic segmentation attempt to learn multiple geographical datasets in a universal, joint model.

Incremental Learning • Scene Segmentation • +1

A Framework for Learning Ante-hoc Explainable Models via Concepts

1 code implementation • CVPR 2022 • Anirban Sarkar, Deepak Vijaykeerthy, Anindya Sarkar, Vineeth N Balasubramanian

To the best of our knowledge, we are the first ante-hoc explanation generation method to show results with a large-scale dataset such as ImageNet.

Explainable Models • Explanation Generation

Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains

no code implementations • 15 Jul 2021 • Puneet Mangla, Shivam Chandhok, Vineeth N Balasubramanian, Fahad Shahbaz Khan

Recent progress towards designing models that can generalize to unseen domains (i.e., domain generalization) or unseen classes (i.e., zero-shot learning) has sparked interest in building models that can tackle both domain shift and semantic shift simultaneously (i.e., zero-shot domain generalization).

Domain Generalization • Zero-Shot Learning • +1

Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains

no code implementations • 12 Jul 2021 • Shivam Chandhok, Sanath Narayan, Hisham Cholakkal, Rao Muhammad Anwer, Vineeth N Balasubramanian, Fahad Shahbaz Khan, Ling Shao

The need to address the scarcity of task-specific annotated data has resulted in concerted efforts in recent years for specific settings such as zero-shot learning (ZSL) and domain generalization (DG), to separately address the issues of semantic shift and domain shift, respectively.

Domain Generalization • Zero-Shot Learning • +1

Learn from Anywhere: Rethinking Generalized Zero-Shot Learning with Limited Supervision

no code implementations • 11 Jul 2021 • Gaurav Bhatt, Shivam Chandhok, Vineeth N Balasubramanian

In this work, we present a practical setting of inductive zero- and few-shot learning, where unlabeled images from other out-of-data classes, which belong to neither seen nor unseen categories, can be used to improve generalization in any-shot learning.

Few-Shot Learning • Generalized Zero-Shot Learning

Canonical Saliency Maps: Decoding Deep Face Models

1 code implementation • 4 May 2021 • Thrupthi Ann John, Vineeth N Balasubramanian, C V Jawahar

As Deep Neural Network models for face processing tasks approach human-like performance, their deployment in critical applications such as law enforcement and access control has seen an upswing, where any failure may have far-reaching consequences.

Face Model • Object Recognition

Instance-wise Causal Feature Selection for Model Interpretation

1 code implementation • 26 Apr 2021 • Pranoy Panda, Sai Srinivas Kancheti, Vineeth N Balasubramanian

We formulate a causal extension to the recently introduced paradigm of instance-wise feature selection to explain black-box visual classifiers.

feature selection

Improving Attribution Methods by Learning Submodular Functions

1 code implementation • 19 Apr 2021 • Piyushi Manupriya, Tarun Ram Menta, J. Saketha Nath, Vineeth N Balasubramanian

This work explores the novel idea of learning a submodular scoring function to improve the specificity/selectivity of existing feature attribution methods.

Specificity

Enhanced Regularizers for Attributional Robustness

1 code implementation • 28 Dec 2020 • Anindya Sarkar, Anirban Sarkar, Vineeth N Balasubramanian

Deep neural networks are the default choice of learning models for computer vision tasks.

Data InStance Prior (DISP) in Generative Adversarial Networks

no code implementations • 8 Dec 2020 • Puneet Mangla, Nupur Kumari, Mayank Singh, Balaji Krishnamurthy, Vineeth N Balasubramanian

Previous works have addressed training in low data setting by leveraging transfer learning and data augmentation techniques.

Data Augmentation • Image Generation • +2

A Deeper Look at the Hessian Eigenspectrum of Deep Neural Networks and its Applications to Regularization

no code implementations • 7 Dec 2020 • Adepu Ravi Sankar, Yash Khasbage, Rahul Vigneswaran, Vineeth N Balasubramanian

In this work, we propose a layerwise loss landscape analysis in which the loss surface at every layer is studied independently, and we also examine how each layer's surface correlates with the overall loss surface.

On Initial Pools for Deep Active Learning

1 code implementation • 30 Nov 2020 • Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, Vineeth N Balasubramanian

While recent studies have focused on evaluating the robustness of various query functions in AL, little to no attention has been given to the design of the initial labeled pool for deep active learning.

Active Learning

Beyond VQA: Generating Multi-word Answer and Rationale to Visual Questions

no code implementations • 24 Oct 2020 • Radhika Dua, Sai Srinivas Kancheti, Vineeth N Balasubramanian

To take this a step forward, we introduce a new task: ViQAR (Visual Question Answering and Reasoning), wherein a model must generate the complete answer and a rationale that seeks to justify the generated answer.

General Classification • Multiple-choice • +2

Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals

no code implementations • 17 Sep 2020 • Saloni Dash, Vineeth N Balasubramanian, Amit Sharma

We present a method for generating counterfactuals by incorporating a structural causal model (SCM) into an improved variant of Adversarially Learned Inference (ALI), which generates counterfactuals in accordance with the causal relationships between attributes of an image.

BIG-bench Machine Learning • counterfactual • +2

On Adversarial Robustness: A Neural Architecture Search perspective

1 code implementation • 16 Jul 2020 • Chaitanya Devaguptapu, Devansh Agarwal, Gaurav Mittal, Pulkit Gopalani, Vineeth N Balasubramanian

We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training.

Adversarial Robustness • Neural Architecture Search

Universalization of any adversarial attack using very few test examples

1 code implementation • 18 May 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks [Dezfooli17, Khrulkov18] for reasonable norms of perturbation.
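A toy sketch of universalizing per-sample gradient perturbations via an SVD, in the spirit of the cited Khrulkov-style construction. The linear "classifier", the sample count, and the perturbation budget are illustrative stand-ins for the paper's VGG setup, not its actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=32)            # toy linear classifier (stands in for VGG)
X = rng.normal(size=(64, 32))      # a small "test sample" of 64 inputs

# Per-sample signed gradient directions of the score w.r.t. each input.
grads = np.sign(X @ w)[:, None] * w

# The top right singular vector gives one shared perturbation direction.
_, _, vt = np.linalg.svd(grads, full_matrices=False)
universal = vt[0] / np.linalg.norm(vt[0])

# Apply the single universal perturbation to every sample and measure
# how many predicted signs flip (the "fooling rate").
eps = 2.0
preds = np.sign(X @ w)
fooled = float(np.mean(np.sign((X + eps * universal) @ w) != preds))
print(f"fooling rate: {fooled:.2f}")
```

In this toy linear case the per-sample gradients are all parallel, so a single direction recovered from a handful of samples already fools a large fraction of inputs; real networks are only approximately like this.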

Adversarial Attack

Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks

1 code implementation • NeurIPS 2021 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and, as more recently shown, even to random spatial transformations (e.g., translations, rotations), calls for both theoretical and empirical understanding.

Adversarial Robustness
