no code implementations • 25 Oct 2023 • Fahim Ahmed Zaman, Xiaodong Wu, Weiyu Xu, Milan Sonka, Raghuraman Mudumbai
We describe a method for verifying the output of a deep neural network for medical image segmentation that is robust to several classes of random as well as worst-case perturbations, i.e., adversarial attacks.
no code implementations • 25 May 2019 • Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai
In this paper, we propose a simple hypothesis about a feature-compression property of artificial intelligence (AI) classifiers and present theoretical arguments showing that this hypothesis accounts for the observed fragility of AI classifiers to small adversarial perturbations.
no code implementations • 5 Dec 2014 • Sampurna Biswas, Sunrita Poddar, Soura Dasgupta, Raghuraman Mudumbai, Mathews Jacob
We consider the recovery of a low-rank and jointly sparse matrix from undersampled measurements of its columns.
no code implementations • 5 Dec 2014 • Sampurna Biswas, Sunrita Poddar, Soura Dasgupta, Raghuraman Mudumbai, Mathews Jacob
We introduce a two-step algorithm with theoretical guarantees to recover a jointly sparse and low-rank matrix from undersampled measurements of its columns.
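A minimal NumPy sketch of the two-step idea, not the authors' algorithm: it assumes measurements Y = A·X through a common Gaussian matrix A, estimates the shared row support by a simple correlation heuristic, then solves a least-squares problem on that support followed by rank truncation. All dimensions, the support rule, and the variable names are illustrative assumptions.

```python
# Toy two-step recovery of a jointly sparse + low-rank matrix from
# undersampled column measurements Y = A @ X.  Illustrative sketch only:
# the correlation-based support rule and all sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, c, m, k, r = 60, 10, 40, 4, 2  # rows, columns, measurements/column, support size, rank

# Ground truth: rank-r matrix whose nonzero rows lie on k shared rows.
S_true = rng.choice(n, size=k, replace=False)
X = np.zeros((n, c))
X[S_true] = rng.standard_normal((k, r)) @ rng.standard_normal((r, c))

A = rng.standard_normal((m, n)) / np.sqrt(m)  # common measurement matrix (m < n)
Y = A @ X                                     # undersampled column measurements

# Step 1: estimate the joint row support by correlating columns of A with Y.
scores = np.linalg.norm(A.T @ Y, axis=1)
S_hat = np.sort(np.argsort(scores)[-k:])

# Step 2: least squares restricted to the estimated support, then rank-r truncation.
X_S, *_ = np.linalg.lstsq(A[:, S_hat], Y, rcond=None)
U, s, Vt = np.linalg.svd(X_S, full_matrices=False)
X_hat = np.zeros((n, c))
X_hat[S_hat] = (U[:, :r] * s[:r]) @ Vt[:r]

print("support recovered:", set(S_hat) == set(S_true))
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

In the noiseless setting, if step 1 identifies the true support, step 2 reduces to an overdetermined least-squares problem (m > k), so the reconstruction is exact up to floating-point error; the theoretical guarantees in the paper concern when such a support estimate succeeds.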