no code implementations • 12 Apr 2024 • Vamshi Krishna Kancharla, Neelam Sinha
Our experiments demonstrate that utilizing defiltered images significantly improves mean average precision compared to training object detection models on distorted images.
no code implementations • 18 Mar 2024 • Debanjali Bhattacharya, Neelam Sinha
In image complexity-specific VBN classification, XGBoost yields average accuracy in the range of 86.5% to 91.5% for positively correlated VBN, which is 2% greater than that obtained using negative correlation.
no code implementations • 5 Feb 2024 • Ammu R., Debanjali Bhattacharya, Ameiy Acharya, Ninad Aithal, Neelam Sinha
The proposed approach is employed for classification of a cohort of 50 healthy controls (HC) and 50 Mild Cognitive Impairment (MCI) subjects, sourced from the ADNI dataset.
no code implementations • 4 Dec 2023 • Ameiy Acharya, Chakka Sai Pradeep, Neelam Sinha
After computing the NSS of each ROI in both healthy and MCI subjects, we quantify the score disparity to identify nodes most impacted by MCI.
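The ROI-ranking step can be sketched as follows; the disparity measure used here (absolute gap between group means) is an assumption for illustration, since the snippet does not specify the paper's exact formulation:

```python
import numpy as np

def roi_disparity(nss_hc, nss_mci):
    """Rank ROIs by the gap between group-mean NSS scores (sketch).

    nss_hc, nss_mci: arrays of shape (num_subjects, num_rois).
    The disparity measure is assumed, not taken from the paper.
    """
    gap = np.abs(nss_hc.mean(axis=0) - nss_mci.mean(axis=0))
    return np.argsort(gap)[::-1]  # most-impacted ROI first
```

With toy scores for three ROIs, the ROI with the largest HC-vs-MCI gap is ranked first.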
1 code implementation • 30 Nov 2023 • Ninad Aithal, Chakka Sai Pradeep, Neelam Sinha
Utilizing resting-state fMRI time-series imaging, we can study the underlying dynamics at earmarked Regions of Interest (ROIs) to understand structure, or the lack thereof.
no code implementations • 3 Nov 2023 • Debanjali Bhattacharya, Neelam Sinha, Yashwanth R., Amit Chattopadhyay
To achieve this, 0- and 1-dimensional persistence diagrams are computed for each visual network representing COCO, ImageNet, and SUN.
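As a minimal sketch of the 0-dimensional case, the persistence pairs of a weighted-graph filtration can be computed with a union-find pass over edges sorted by weight (a hypothetical edge list; real pipelines typically use libraries such as GUDHI or Ripser):

```python
def zero_dim_persistence(num_vertices, edges):
    """0-dimensional persistence pairs for a graph filtration (sketch).

    Vertices are born at filtration value 0; a connected component dies
    when an edge (processed in order of increasing weight) merges it
    into another component.  edges: list of (weight, u, v) tuples.
    """
    parent = list(range(num_vertices))

    def find(x):
        # find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []  # (birth, death) for components that die
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv          # merge: one component dies at w
            pairs.append((0.0, w))
    # one component survives with infinite persistence (not listed)
    return pairs
```

On a triangle with edge weights 1, 2, 3, the third edge closes a cycle and produces no merge, so only two finite 0-dimensional pairs appear.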
1 code implementation • 27 Sep 2023 • Naveen Kanigiri, Manohar Suggula, Debanjali Bhattacharya, Neelam Sinha
The results of this analysis establish a baseline for studying how differently the human brain functions while viewing images of diverse complexities.
1 code implementation • 7 Sep 2023 • Vamshi K. Kancharala, Debanjali Bhattacharya, Neelam Sinha
Subsequently, a parallel CNN model is employed that uses the combined 2D features for classifying images across COCO, ImageNet, and SUN.
1 code implementation • 15 Jul 2023 • Sai Pradeep Chakka, Sunil Kumar Vengalil, Neelam Sinha
The proposed algorithm is applied to astronomical data: 12 temporal classes of time series of the black hole GRS 1915+105, obtained from the RXTE satellite, with an average length of 25,000.
no code implementations • 1 Jun 2023 • Vamshi Krishna Kancharla, Debanjali Bhattacharya, Neelam Sinha, Jitender Saini, Pramod Kumar Pal, Sandhya M
Structural MRI (S-MRI) is one of the most versatile imaging modalities and has revolutionized the anatomical study of the brain in past decades.
1 code implementation • 23 Apr 2023 • Chakka Sai Pradeep, Neelam Sinha
An autoencoder is trained with a loss function designed to learn a latent-space representation (using both time and frequency domains) that is time-invariant.
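One way such a loss can combine the two domains, sketched here with NumPy under assumptions not given in the abstract, is to pair a time-domain reconstruction term with a frequency-domain term on the FFT magnitude, which is unaffected by circular time shifts:

```python
import numpy as np

def time_freq_loss(x, x_hat, alpha=0.5):
    """Combined time- and frequency-domain reconstruction loss (sketch).

    The FFT-magnitude term is invariant to circular shifts of the
    signal, nudging the representation toward time-invariance.
    alpha trades off the two terms (an assumed hyperparameter).
    """
    time_term = np.mean((x - x_hat) ** 2)
    freq_term = np.mean(
        (np.abs(np.fft.rfft(x)) - np.abs(np.fft.rfft(x_hat))) ** 2
    )
    return alpha * time_term + (1 - alpha) * freq_term
```

With alpha=0 the loss vanishes for any circularly shifted copy of the input, illustrating the shift-invariance of the frequency term.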
no code implementations • 8 Sep 2021 • Ammu R, Neelam Sinha
To address this, we propose a novel evaluation metric for segmentation performance that emphasizes smaller segments by assigning higher weight to their pixels.
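A minimal sketch of such a size-weighted overlap metric (the paper's exact formulation may differ): each ground-truth segment contributes a Dice score weighted by the inverse of its pixel count, so a missed small segment is penalized as heavily as a missed large one.

```python
import numpy as np

def size_weighted_dice(gt, pred):
    """Dice-style score with inverse-size weighting (illustrative sketch).

    gt, pred: integer label maps of the same shape; label 0 = background.
    Smaller ground-truth segments receive proportionally larger weight.
    """
    labels = [l for l in np.unique(gt) if l != 0]
    weights, scores = [], []
    for l in labels:
        g, p = gt == l, pred == l
        inter = np.logical_and(g, p).sum()
        dice = 2.0 * inter / (g.sum() + p.sum() + 1e-9)
        weights.append(1.0 / g.sum())  # smaller segment -> larger weight
        scores.append(dice)
    w = np.array(weights) / np.sum(weights)
    return float(np.sum(w * np.array(scores)))
```

In the example below, a prediction that nails an 8-pixel segment but misses a 2-pixel one scores only 0.2, because the small segment carries 80% of the weight.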
no code implementations • 3 Sep 2021 • Yogesh Kochar, Sunil Kumar Vengalil, Neelam Sinha
Two independent contributions of this paper are: 1) a novel activation function for faster training convergence, and 2) systematic pruning of filters of trained models, irrespective of the activation function.
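The pruning step can be sketched as ranking convolutional filters by a saliency score and dropping the lowest-ranked fraction; the L1-norm criterion used here is a common choice and an assumption, since the snippet does not state the paper's criterion:

```python
import numpy as np

def prune_filters(weights, keep_frac=0.75):
    """Keep the top keep_frac of conv filters by L1 norm (sketch).

    weights: array of shape (num_filters, in_channels, kh, kw).
    Returns the pruned weight tensor and the indices of kept filters.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    num_keep = max(1, int(round(keep_frac * weights.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:num_keep])
    return weights[keep], keep
```

After pruning, the next layer's input channels would need to be sliced to match the kept indices; that bookkeeping is omitted here.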
no code implementations • 1 Sep 2021 • Sunil Kumar Vengalil, Neelam Sinha
Deep neural networks have become the default choice for many applications like image and video recognition, segmentation, and other related tasks. However, a critical challenge with these models is the lack of explainability. This requirement of generating explainable predictions has motivated the research community to perform various analyses on trained models. In this study, we analyze the learned feature maps of trained models using MNIST images for achieving more explainable predictions. Our study is focused on deriving a set of primitive elements, here called visual concepts, that can be used to generate any arbitrary sample from the data-generating distribution. We derive the primitive elements from the feature maps learned by the model. We illustrate the idea by generating visual concepts from a Variational Autoencoder trained using MNIST images. We augment the training data of the MNIST dataset by adding about 60,000 new images generated with visual concepts chosen at random. With this we were able to reduce the reconstruction loss (mean square error) from an initial value of 120 without augmentation to 60 with augmentation. Our approach is a first step towards the final goal of achieving trained deep neural network models whose predictions, features in hidden layers, and learned filters can be well explained. Such a model, when deployed in production, can easily be modified to adapt to new data, whereas existing deep learning models require retraining or fine-tuning.
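The augmentation step can be sketched as decoding random combinations of concept vectors in latent space; the blending rule and the `decode` callable here are placeholders for the trained VAE decoder and whatever combination scheme the paper actually uses:

```python
import numpy as np

def augment_with_concepts(concepts, decode, n_new=10, seed=0):
    """Generate new training images from visual concepts (sketch).

    concepts: array (num_concepts, latent_dim) of concept latent vectors.
    decode:   stand-in for the trained VAE decoder (assumed available).
    Each new image decodes the mean of two randomly chosen concepts.
    """
    rng = np.random.default_rng(seed)
    new_images = []
    for _ in range(n_new):
        idx = rng.integers(0, len(concepts), size=2)  # pick two concepts
        z = concepts[idx].mean(axis=0)                # blend in latent space
        new_images.append(decode(z))
    return np.stack(new_images)
```

With a real decoder, calling this with `n_new=60000` would produce the augmentation set described in the abstract.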
no code implementations • 19 Apr 2016 • Hariharan Ramasangu, Neelam Sinha
The novelty of the proposed method lies in utilizing the phase information in the transformed domain for classifying between the cognitive tasks, along with a random sieve function chosen with a particular probability distribution.
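A rough sketch of the idea, with all specifics assumed for illustration (the transform is taken to be the FFT and the sieve a Bernoulli mask; the paper's choices may differ): extract the phase of the transformed time series, then retain a random subset of coefficients drawn with a given probability.

```python
import numpy as np

def sieved_phase_features(ts, keep_prob=0.6, seed=0):
    """FFT phase features with a random sieve applied (sketch).

    keep_prob parameterizes the assumed Bernoulli sieve distribution;
    masked-out coefficients are zeroed.
    """
    rng = np.random.default_rng(seed)
    phase = np.angle(np.fft.rfft(ts))            # phase in transformed domain
    sieve = rng.random(phase.shape) < keep_prob  # random sieve mask
    return phase * sieve
```

The resulting feature vector (phase values in [-pi, pi], with a random subset zeroed) would then feed a classifier distinguishing the cognitive tasks.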