no code implementations • 28 Nov 2023 • Jacob R. Epifano, Stephen Glass, Ravi P. Ramachandran, Sharad Patel, Aaron J. Masino, Ghulam Rasool
This study investigated the performance, explainability, and robustness of deployed artificial intelligence (AI) models in predicting mortality during the COVID-19 pandemic and beyond.
no code implementations • 22 Jun 2023 • Ian E. Nielsen, Erik Grundeland, Joseph Snedeker, Ghulam Rasool, Ravi P. Ramachandran
Feature visualization is a technique for inspecting the features learned by black-box machine learning models.
1 code implementation • 22 Mar 2023 • Jacob R. Epifano, Ravi P. Ramachandran, Aaron J. Masino, Ghulam Rasool
In the last few years, many works have tried to explain the predictions of deep learning models.
no code implementations • 15 Mar 2023 • Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Bouaynaya, Hassan M. Fathallah-Shaykh, Ghulam Rasool
The expansion of explainable artificial intelligence as a field of research has generated numerous methods for visualizing and understanding the black box of a machine learning model.
no code implementations • 11 Mar 2023 • Asim Waqas, Aakash Tripathi, Ravi P. Ramachandran, Paul Stewart, Ghulam Rasool
Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning.
no code implementations • 28 Apr 2022 • Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ghulam Rasool, Ravi P. Ramachandran
The Transformer architecture has widespread applications, particularly in natural language processing and computer vision.
no code implementations • 23 Jul 2021 • Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran
Later, we discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in having meaningful explanations.