no code implementations • 9 Jan 2024 • Tanmay Garg, Deepika Vemuri, Vineeth N Balasubramanian
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.
1 code implementation • 16 Dec 2023 • Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N Balasubramanian
We observe two main causes of fragile attributions: first, existing robustness metrics (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, making random perturbations appear to be a strong attack; and second, attribution can be concentrated in a small region even when an image has multiple important parts.
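The over-penalization described above is easy to see concretely. Below is a minimal sketch (not the paper's implementation) of the top-k intersection metric: the function name and the toy attribution maps are illustrative assumptions.

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k):
    """Fraction of the k highest-attribution pixels shared by two maps.
    A strict locality metric: even a small spatial shift of an otherwise
    faithful attribution drives this score down, which is the
    over-penalization the abstract points out."""
    top_a = set(np.argsort(attr_a.ravel())[-k:])
    top_b = set(np.argsort(attr_b.ravel())[-k:])
    return len(top_a & top_b) / k

rng = np.random.default_rng(0)
attr = rng.random((8, 8))
shifted = np.roll(attr, shift=1, axis=1)  # a one-pixel shift of the same map
score = topk_intersection(attr, shifted, k=5)
```

Identical maps score 1.0, but a one-pixel shift of the very same map can already lose most of the top-k overlap despite being a "reasonable local shift".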
no code implementations • 23 Oct 2023 • Aniket Vashishtha, Abbavaram Gowtham Reddy, Abhinav Kumar, Saketh Bachu, Vineeth N Balasubramanian, Amit Sharma
At the core of causal inference lies the challenge of determining reliable causal graphs solely based on observational data.
no code implementations • 26 Sep 2023 • Thrupthi Ann John, Vineeth N Balasubramanian, C. V. Jawahar
Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work.
no code implementations • ICCV 2023 • Vimal K B, Saketh Bachu, Tanmay Garg, Niveditha Lakshmi Narasimhan, Raghavan Konuru, Vineeth N Balasubramanian
Estimating the transferability of publicly available pretrained models to a target task has become increasingly important for transfer learning in recent years.
no code implementations • 29 May 2023 • Abbavaram Gowtham Reddy, Saketh Bachu, Saloni Dash, Charchit Sharma, Amit Sharma, Vineeth N Balasubramanian
Counterfactual data augmentation has recently emerged as a method to mitigate confounding biases in the training data.
no code implementations • 26 Mar 2023 • Chaitanya Devaguptapu, Samarth Sinha, K J Joseph, Vineeth N Balasubramanian, Animesh Garg
Models pre-trained on large-scale datasets are often fine-tuned to support newer tasks and datasets that arrive over time.
1 code implementation • ICCV 2023 • Shubhra Aich, Jesus Ruiz-Santaquiteria, Zhenyu Lu, Prachi Garg, K J Joseph, Alvaro Fernandez Garcia, Vineeth N Balasubramanian, Kenrick Kin, Chengde Wan, Necati Cihan Camgoz, Shugao Ma, Fernando de la Torre
Our sampling scheme significantly outperforms SOTA methods on two 3D skeleton gesture datasets: the publicly available SHREC 2017, and EgoGesture3D, which we extract from a publicly available RGBD dataset.
no code implementations • 9 Nov 2022 • Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, Vineeth N Balasubramanian
While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models.
no code implementations • 8 Nov 2022 • Abbavaram Gowtham Reddy, Vineeth N Balasubramanian
Methods based on potential outcomes framework solve this problem by exploiting inductive biases and heuristics from causal inference.
no code implementations • 22 Oct 2022 • Abbavaram Gowtham Reddy, Saloni Dash, Amit Sharma, Vineeth N Balasubramanian
Given a causal generative process, we formally characterize the adverse effects of confounding on any downstream tasks and show that the correlation between generative factors (attributes) can be used to quantitatively measure confounding between generative factors.
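The idea of using correlation between generative factors as a quantitative confounding measure can be sketched in a few lines. This is a simplified illustration, not the paper's actual metric; the function name and synthetic factors are assumptions.

```python
import numpy as np

def pairwise_confounding(attrs):
    """Absolute Pearson correlation between every pair of generative-factor
    columns; large off-diagonal entries flag confounded attribute pairs."""
    return np.abs(np.corrcoef(attrs, rowvar=False))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = 0.9 * a + 0.1 * rng.normal(size=500)  # b is strongly confounded with a
c = rng.normal(size=500)                  # c is generated independently
scores = pairwise_confounding(np.column_stack([a, b, c]))
```

In this toy setup the (a, b) entry is close to 1 while the (a, c) entry stays near 0, matching the intuition that correlated generative factors indicate confounding.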
1 code implementation • 21 Oct 2022 • Surgan Jandial, Yash Khasbage, Arghya Pal, Vineeth N Balasubramanian, Balaji Krishnamurthy
The inadvertent stealing of private/sensitive information using Knowledge Distillation (KD) has recently received significant attention and, given its critical nature, has guided subsequent defense efforts.
no code implementations • 10 Oct 2022 • Rebbapragada V C Sairam, Monish Keswani, Uttaran Sinha, Nishit Shah, Vineeth N Balasubramanian
In this paper, we define the size of an object as the number of pixels it covers in an image, and size imbalance as the over-representation of objects of certain sizes in a dataset.
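These two definitions are directly computable from instance masks. The sketch below is a minimal illustration under assumed COCO-style small/medium/large pixel thresholds (32² and 96²); the function names and thresholds are not from the paper.

```python
import numpy as np

def object_sizes(masks):
    """Size of each object = number of pixels in its binary mask."""
    return [int(m.sum()) for m in masks]

def size_shares(sizes, bins=(32**2, 96**2)):
    """Bucket objects into small/medium/large (illustrative COCO-style
    thresholds) and report each bucket's share of the dataset;
    a heavily skewed share indicates size imbalance."""
    small = sum(s < bins[0] for s in sizes)
    large = sum(s >= bins[1] for s in sizes)
    total = len(sizes)
    return {"small": small / total,
            "medium": (total - small - large) / total,
            "large": large / total}

masks = [np.ones((10, 10)), np.ones((20, 20)), np.ones((15, 15))]
shares = size_shares(object_sizes(masks))
```

Here every toy object is under 32² pixels, so the "small" bucket holds 100% of the instances, an extreme case of the size imbalance being defined.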
no code implementations • 21 Jul 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Inspired by this, we identify and formulate a new, pragmatic problem setting of NCDwF: Novel Class Discovery without Forgetting, which tasks a machine learning model to incrementally discover novel categories of instances from unlabeled data, while maintaining its performance on the previously seen categories.
no code implementations • 13 Jun 2022 • Puneet Mangla, Shivam Chandhok, Milan Aggarwal, Vineeth N Balasubramanian, Balaji Krishnamurthy
To this end, we propose IntriNsic multimodality for DomaIn GeneralizatiOn (INDIGO), a simple and elegant way of leveraging the intrinsic modality present in these pre-trained multimodal networks along with the visual modality to enhance generalization to unseen domains at test-time.
1 code implementation • CVPR 2022 • Monish Keswani, Sriranjani Ramakrishnan, Nishant Reddy, Vineeth N Balasubramanian
With growing use cases of model reuse and distillation, there is a need to also study transfer of interpretability from one model to another.
1 code implementation • 22 Apr 2022 • K J Joseph, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian
Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data, by utilizing labeled instances from a disjoint set of classes.
1 code implementation • CVPR 2022 • Hari Chandana Kuchibhotla, Sumitra S Malagi, Shivam Chandhok, Vineeth N Balasubramanian
Secondly, we introduce a unified feature-generative framework for CGZSL that leverages bi-directional incremental alignment to dynamically adapt to the addition of new classes, with or without labeled data, arriving over time in any of these CGZSL settings.
2 code implementations • CVPR 2022 • K J Joseph, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, Vineeth N Balasubramanian
Deep learning models tend to forget their earlier knowledge while incrementally learning new tasks.
1 code implementation • WACV 2022 • Vaishnavi Khindkar, Chetan Arora, Vineeth N Balasubramanian, Anbumani Subramanian, C. V. Jawahar
Qualitative results demonstrate the ability of ILLUME to attend to important object instances required for alignment.
2 code implementations • 10 Dec 2021 • Abbavaram Gowtham Reddy, Benin Godfrey L, Vineeth N Balasubramanian
Finally, we perform an empirical study of state-of-the-art disentangled representation learners using our metrics and dataset to evaluate them from a causal perspective.
no code implementations • NeurIPS 2021 • Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps.
no code implementations • 24 Nov 2021 • Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, Vineeth N Balasubramanian, Amit Sharma
A trained neural network can be interpreted as a structural causal model (SCM) that provides the effect of changing input variables on the model's output.
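Viewing a network as an SCM makes "the effect of changing input variables on the output" estimable by intervention: fix a context, set one input to two different values (a do-operation on that input node), and average the output difference. The sketch below is a hedged illustration of this average-causal-effect idea, not the paper's method; the function name, noise scale, and toy linear model are assumptions.

```python
import numpy as np

def average_causal_effect(model, baseline_x, feature_idx, lo, hi,
                          n_samples=256, rng=None):
    """Estimate the ACE of one input feature on the model output:
    sample contexts around a baseline, intervene by setting the feature
    to `hi` vs `lo`, and average the resulting output difference."""
    rng = rng or np.random.default_rng(0)
    effects = []
    for _ in range(n_samples):
        x = baseline_x + rng.normal(scale=0.1, size=baseline_x.shape)
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[feature_idx], x_lo[feature_idx] = hi, lo
        effects.append(model(x_hi) - model(x_lo))
    return float(np.mean(effects))

# Toy "network": a linear model, where the true effect of feature 0
# is exactly its weight times (hi - lo).
w = np.array([2.0, -1.0, 0.5])
model = lambda x: float(w @ x)
ace = average_causal_effect(model, np.zeros(3), feature_idx=0, lo=0.0, hi=1.0)
```

For the linear toy model the estimate recovers the weight of feature 0; for a real network the same interventional recipe measures a nonlinear, context-averaged effect.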
1 code implementation • 30 Oct 2021 • Anindya Sarkar, Anirban Sarkar, Sowrya Gali, Vineeth N Balasubramanian
Current SOTA adversarially robust models are mostly based on adversarial training (AT) and differ only by some regularizers either at inner maximization or outer minimization steps.
1 code implementation • 23 Oct 2021 • Prachi Garg, Rohit Saluja, Vineeth N Balasubramanian, Chetan Arora, Anbumani Subramanian, C. V. Jawahar
Recent efforts in multi-domain learning for semantic segmentation attempt to learn multiple geographical datasets in a universal, joint model.
1 code implementation • CVPR 2022 • Anirban Sarkar, Deepak Vijaykeerthy, Anindya Sarkar, Vineeth N Balasubramanian
To the best of our knowledge, we are the first ante-hoc explanation generation method to show results with a large-scale dataset such as ImageNet.
no code implementations • 15 Jul 2021 • Puneet Mangla, Shivam Chandhok, Vineeth N Balasubramanian, Fahad Shahbaz Khan
Recent progress towards designing models that can generalize to unseen domains (i.e., domain generalization) or unseen classes (i.e., zero-shot learning) has sparked interest in building models that can tackle both domain shift and semantic shift simultaneously (i.e., zero-shot domain generalization).
no code implementations • 12 Jul 2021 • Shivam Chandhok, Sanath Narayan, Hisham Cholakkal, Rao Muhammad Anwer, Vineeth N Balasubramanian, Fahad Shahbaz Khan, Ling Shao
The need to address the scarcity of task-specific annotated data has resulted in concerted efforts in recent years for specific settings such as zero-shot learning (ZSL) and domain generalization (DG), to separately address the issues of semantic shift and domain shift, respectively.
no code implementations • 11 Jul 2021 • Gaurav Bhatt, Shivam Chandhok, Vineeth N Balasubramanian
In this work, we present a practical setting of inductive zero- and few-shot learning, where unlabeled images from out-of-data classes (those belonging to neither the seen nor the unseen categories) can be used to improve generalization in any-shot learning.
1 code implementation • 4 May 2021 • Thrupthi Ann John, Vineeth N Balasubramanian, C V Jawahar
As Deep Neural Network models for face processing tasks approach human-like performance, their deployment in critical applications such as law enforcement and access control has seen an upswing, where any failure may have far-reaching consequences.
1 code implementation • 26 Apr 2021 • Pranoy Panda, Sai Srinivas Kancheti, Vineeth N Balasubramanian
We formulate a causal extension to the recently introduced paradigm of instance-wise feature selection to explain black-box visual classifiers.
1 code implementation • 19 Apr 2021 • Piyushi Manupriya, Tarun Ram Menta, J. Saketha Nath, Vineeth N Balasubramanian
This work explores the novel idea of learning a submodular scoring function to improve the specificity/selectivity of existing feature attribution methods.
2 code implementations • CVPR 2021 • K J Joseph, Salman Khan, Fahad Shahbaz Khan, Vineeth N Balasubramanian
Humans have a natural instinct to identify unknown object instances in their environments.
1 code implementation • 28 Dec 2020 • Anindya Sarkar, Anirban Sarkar, Vineeth N Balasubramanian
Deep neural networks are the default choice of learning models for computer vision tasks.
no code implementations • 8 Dec 2020 • Puneet Mangla, Nupur Kumari, Mayank Singh, Balaji Krishnamurthy, Vineeth N Balasubramanian
Previous works have addressed training in low data setting by leveraging transfer learning and data augmentation techniques.
no code implementations • 7 Dec 2020 • Adepu Ravi Sankar, Yash Khasbage, Rahul Vigneswaran, Vineeth N Balasubramanian
In this work, we propose a layerwise loss landscape analysis in which the loss surface at every layer is studied independently, and we also examine how each layer's surface correlates with the overall loss surface.
1 code implementation • 30 Nov 2020 • Akshay L Chandra, Sai Vikas Desai, Chaitanya Devaguptapu, Vineeth N Balasubramanian
While recent studies have focused on evaluating the robustness of various query functions in AL, little to no attention has been given to the design of the initial labeled pool for deep active learning.
no code implementations • 24 Oct 2020 • Radhika Dua, Sai Srinivas Kancheti, Vineeth N Balasubramanian
To take this a step forward, we introduce a new task: ViQAR (Visual Question Answering and Reasoning), wherein a model must generate the complete answer and a rationale that seeks to justify the generated answer.
no code implementations • 17 Sep 2020 • Saloni Dash, Vineeth N Balasubramanian, Amit Sharma
We present a method for generating counterfactuals by incorporating a structural causal model (SCM) in an improved variant of Adversarially Learned Inference (ALI), that generates counterfactuals in accordance with the causal relationships between attributes of an image.
1 code implementation • 16 Jul 2020 • Chaitanya Devaguptapu, Devansh Agarwal, Gaurav Mittal, Pulkit Gopalani, Vineeth N Balasubramanian
We show that NAS, which is popular for achieving SoTA accuracy, can provide adversarial accuracy as a free add-on without any form of adversarial training.
1 code implementation • 18 May 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks [Dezfooli17, Khrulkov18] for reasonable norms of perturbation.
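One simple way to "universalize" a per-image attack like FGSM is to average input gradients over a small test sample and take the sign once. The sketch below illustrates that idea on a toy differentiable loss; it is an assumption-laden simplification, not the paper's exact procedure, and `grad_fn` stands in for a real network's input-gradient computation.

```python
import numpy as np

def universal_fgsm(grad_fn, images, eps):
    """Build one universal perturbation: average the per-image input
    gradients over the sample, then take eps * sign (FGSM-style)."""
    mean_grad = np.mean([grad_fn(x) for x in images], axis=0)
    return eps * np.sign(mean_grad)

# Toy "loss": L(x) = w @ x, whose input gradient is the constant w.
w = np.array([1.0, -2.0, 0.5, 0.0])
grad_fn = lambda x: w
images = [np.zeros(4) for _ in range(64)]  # stand-in for a 64-image sample
delta = universal_fgsm(grad_fn, images, eps=0.1)
```

The same single `delta` is then added to every test image, which is what makes the perturbation "universal" rather than image-specific.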
2 code implementations • 17 Mar 2020 • K J Joseph, Jathushan Rajasegaran, Salman Khan, Fahad Shahbaz Khan, Vineeth N Balasubramanian
In a real-world setting, object instances from new classes can be continuously encountered by object detectors.
1 code implementation • NeurIPS 2021 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and, as shown more recently, even to random spatial transformations (e.g., translations, rotations), calls for both theoretical and empirical understanding.