no code implementations • 6 May 2024 • Neo Christopher Chung, Hongkyou Chung, Hearim Lee, Hongbeom Chung, Lennart Brocki, George Dyer
"Explainability" has multiple meanings which are often used interchangeably, and there are an even greater number of XAI methods - none of which presents a clear edge.
1 code implementation • 4 Dec 2023 • Lennart Brocki, Neo Christopher Chung
We introduce class-discriminative attention maps (CDAM), a novel post-hoc explanation method that is highly sensitive to the target class.
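As a rough illustration of the idea behind a class-discriminative, gradient-based token relevance score (a hypothetical sketch, not the authors' CDAM code; the mean-pooled head and the function `token_relevance` are assumptions):

```python
# Sketch: relevance of each ViT token as the dot product between its embedding
# and the gradient of the target-class logit with respect to that embedding.
import torch

def token_relevance(tokens: torch.Tensor, head: torch.nn.Module, target_class: int) -> torch.Tensor:
    """tokens: (num_tokens, dim) encoder outputs; head: classification head (assumed)."""
    tokens = tokens.detach().requires_grad_(True)
    logit = head(tokens.mean(dim=0))[target_class]   # pool tokens, pick the target-class logit
    grads, = torch.autograd.grad(logit, tokens)      # d logit / d token embeddings
    return (tokens * grads).sum(dim=-1)              # one relevance score per token

# toy usage with random embeddings and a linear head
head = torch.nn.Linear(64, 10)
scores = token_relevance(torch.randn(196, 64), head, target_class=3)
print(scores.shape)  # torch.Size([196]); reshape to 14x14 for a spatial map
```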
no code implementations • 23 Nov 2023 • Neo Christopher Chung, George Dyer, Lennart Brocki
The global mental health crisis is looming with a rapid increase in mental disorders, limited resources, and the social stigma of seeking treatment.
no code implementations • 20 Mar 2023 • Lennart Brocki, Neo Christopher Chung
Therefore, we study and propose the integration of expert-derived radiomics and DNN-predicted biomarkers into an interpretable classifier, which we call ConRad, for computed tomography (CT) scans of lung cancer.
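A minimal sketch of the general recipe (not the authors' pipeline; all data, feature names, and hyperparameters below are synthetic placeholders): concatenate expert-derived radiomics features with DNN-predicted biomarker scores and fit a sparse, interpretable classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(200, 8))        # e.g., shape/texture features from CT (synthetic)
dnn_biomarkers = rng.uniform(size=(200, 5))  # e.g., DNN-predicted nodule attributes (synthetic)
y = rng.integers(0, 2, size=200)             # benign vs. malignant labels (synthetic)

X = np.hstack([radiomics, dnn_biomarkers])
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print(clf.coef_)  # sparse coefficients show which features drive the prediction
```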
1 code implementation • 2 Mar 2023 • Lennart Brocki, Neo Christopher Chung
However, the change in the prediction outcome may stem from perturbation artifacts: perturbed samples are out of distribution (OOD) relative to the training dataset and can therefore disturb the model in unexpected ways.
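For context, a hedged sketch of the standard perturbation (fidelity) test that this concern applies to (the function `fidelity_curve` and the zero-masking baseline are assumptions, not the paper's exact protocol):

```python
# Mask the pixels ranked most important by a saliency map and record the drop in
# the class score. Because masked inputs are OOD, part of the drop may be a
# perturbation artifact rather than evidence about the explanation itself.
import torch

def fidelity_curve(model, image, saliency, target_class, steps=10):
    flat = saliency.flatten().argsort(descending=True)   # most important pixels first
    scores = []
    for k in range(0, flat.numel() + 1, flat.numel() // steps):
        masked = image.clone().flatten()
        masked[flat[:k]] = 0.0                           # baseline value, e.g. zero
        with torch.no_grad():
            out = model(masked.view_as(image).unsqueeze(0))
        scores.append(out[0, target_class].item())
    return scores  # a steep drop suggests faithful saliency (or an OOD artifact)
```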
no code implementations • 23 Jan 2023 • Lennart Brocki, George C. Dyer, Anna Gładka, Neo Christopher Chung
The system consists of a core generative model and post-processing algorithms.
no code implementations • 30 Sep 2022 • Lennart Brocki, Wistan Marchadour, Jonas Maison, Bogdan Badic, Panagiotis Papadimitroulas, Mathieu Hatt, Franck Vermet, Neo Christopher Chung
Interestingly, there was a critical discrepancy between model-centric (fidelity) and human-centric (ROC and DSC) evaluation.
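To make the human-centric side of such an evaluation concrete, here is a small sketch of the Dice similarity coefficient (DSC) between a thresholded saliency map and an expert segmentation (an illustration only; threshold and function name are assumptions):

```python
import numpy as np

def dice_coefficient(saliency: np.ndarray, expert_mask: np.ndarray, threshold: float = 0.5) -> float:
    pred = saliency >= threshold                  # binarize the saliency map
    truth = expert_mask.astype(bool)              # expert-annotated region
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)
```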
no code implementations • 6 Mar 2022 • Lennart Brocki, Neo Christopher Chung
Despite the excellent performance of deep neural networks (DNNs) in image classification, detection, and prediction, characterizing how DNNs make a given decision remains an open problem, resulting in a number of interpretability methods.
no code implementations • 7 Oct 2021 • Neo Christopher Chung
Overall, these HITL approaches will increase interaction between humans and AI, and thus help future AI systems better understand our own creative and emotional processes.
no code implementations • 10 Nov 2020 • Lennart Brocki, Neo Christopher Chung
In particular, gradients of classes or concepts with respect to the input features (e.g., pixels in images) are often used as importance scores or estimators, which are visualized in saliency maps.
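A minimal example of the gradient-based importance scores referred to above (a generic "vanilla gradient" estimator, not the paper's specific method; channel aggregation by max is an assumption):

```python
import torch

def vanilla_gradient_saliency(model, image, target_class):
    image = image.clone().requires_grad_(True)        # image: (C, H, W)
    score = model(image.unsqueeze(0))[0, target_class] # target-class score
    score.backward()                                   # gradients w.r.t. input pixels
    return image.grad.abs().max(dim=0).values          # aggregate channels -> HxW saliency map
```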
1 code implementation • 29 Oct 2019 • Lennart Brocki, Neo Christopher Chung
Evaluating, explaining, and visualizing high-level concepts in generative models, such as variational autoencoders (VAEs), is challenging in part due to a lack of known prediction classes that are required to generate saliency maps in supervised learning.
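One way to obtain saliency-like maps without prediction classes is to differentiate a single latent variable with respect to the input; the sketch below illustrates this setting (the encoder returning `(mu, logvar)` and the function `latent_saliency` are assumptions, not necessarily the paper's approach):

```python
import torch

def latent_saliency(encoder, image, latent_index):
    image = image.clone().requires_grad_(True)   # image: (C, H, W)
    mu, logvar = encoder(image.unsqueeze(0))     # assumed VAE encoder API
    mu[0, latent_index].backward()               # gradient of one latent mean w.r.t. pixels
    return image.grad.abs().sum(dim=0)           # HxW importance map
```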