Search Results for author: Neo Christopher Chung

Found 11 papers, 3 papers with code

False Sense of Security in Explainable Artificial Intelligence (XAI)

no code implementations · 6 May 2024 · Neo Christopher Chung, Hongkyou Chung, Hearim Lee, Hongbeom Chung, Lennart Brocki, George Dyer

"Explainability" has multiple meanings which are often used interchangeably, and there are an even greater number of XAI methods - none of which presents a clear edge.

Class-Discriminative Attention Maps for Vision Transformers

1 code implementation · 4 Dec 2023 · Lennart Brocki, Neo Christopher Chung

We introduce class-discriminative attention maps (CDAM), a novel post-hoc explanation method that is highly sensitive to the target class.

Self-Supervised Learning · Semantic Segmentation
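
The abstract snippet above names the method but not its formulation. Below is a minimal sketch of a generic class-specific token-relevance map for a Vision Transformer (gradient-times-activation on the final block), shown only to illustrate the kind of quantity such a post-hoc explanation computes; it is not the CDAM definition from the paper, and the timm model name and layer choice are assumptions.

```python
# Generic class-specific token relevance for a ViT -- NOT the paper's CDAM
# formulation; model name and hooked layer are illustrative assumptions.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=False)
model.eval()

tokens = {}

def save_tokens(module, inputs, output):
    # Keep the final-block token embeddings and retain their gradients.
    output.retain_grad()
    tokens["out"] = output

hook = model.blocks[-1].register_forward_hook(save_tokens)

x = torch.randn(1, 3, 224, 224)            # dummy input image
logits = model(x)
target_class = logits.argmax(dim=1).item()

# Backpropagate only the score of the target class.
logits[0, target_class].backward()

t = tokens["out"]                              # (1, 1 + num_patches, dim)
relevance = (t.grad * t).sum(dim=-1)[0, 1:]    # drop the CLS token
relevance_map = relevance.reshape(14, 14)      # 224 / 16 = 14 patches per side
hook.remove()
print(relevance_map.shape)
```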

Challenges of Large Language Models for Mental Health Counseling

no code implementations · 23 Nov 2023 · Neo Christopher Chung, George Dyer, Lennart Brocki

The global mental health crisis is looming with a rapid increase in mental disorders, limited resources, and the social stigma of seeking treatment.

Hallucination · Navigate

Integration of Radiomics and Tumor Biomarkers in Interpretable Machine Learning Models

no code implementations · 20 Mar 2023 · Lennart Brocki, Neo Christopher Chung

Therefore, we study and propose the integration of expert-derived radiomics and DNN-predicted biomarkers in interpretable classifiers which we call ConRad, for computerized tomography (CT) scans of lung cancer.

Feature Selection · Interpretable Machine Learning
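
As a rough illustration of the integration described above, the sketch below concatenates expert-derived radiomics features with DNN-predicted biomarker scores and fits an interpretable (sparse linear) classifier. The feature names, random data, and choice of an L1-penalized logistic regression are assumptions, not the ConRad implementation itself.

```python
# Combine expert radiomics features with DNN-predicted biomarkers in an
# interpretable classifier -- an illustrative sketch with dummy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
radiomics = rng.normal(size=(n, 10))        # e.g. shape/texture features from CT
dnn_biomarkers = rng.normal(size=(n, 5))    # e.g. biomarker scores predicted by a CNN
y = rng.integers(0, 2, size=n)              # benign vs. malignant labels (dummy)

X = np.hstack([radiomics, dnn_biomarkers])

# A sparse linear model keeps the decision rule inspectable per feature.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, y)
print(clf.coef_)                            # one weight per combined feature
```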

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks

1 code implementation · 2 Mar 2023 · Lennart Brocki, Neo Christopher Chung

However, the change in the prediction outcome may stem from perturbation artifacts, since perturbed samples in the test dataset are out of distribution (OOD) compared to the training dataset and can therefore potentially disturb the model in an unexpected manner.

Data Augmentation
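
A minimal sketch of the perturbation-based fidelity test this entry is concerned with: pixels ranked most important by an estimator are removed and the drop in class probability is measured. The model, the masking value, and the use of plain gradients as the estimator are assumptions; the masked input is exactly the kind of out-of-distribution sample the paper flags, and this sketch does not correct for that.

```python
# Perturb the most-important pixels and measure the change in class probability.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # dummy image

prob = torch.softmax(model(x), dim=1)
cls = prob.argmax(dim=1).item()
prob[0, cls].backward()

importance = x.grad.abs().sum(dim=1)                   # (1, 224, 224) pixel scores
k = int(0.1 * importance.numel())                      # perturb top 10% of pixels
top = torch.topk(importance.flatten(), k).indices

mask = torch.ones_like(importance.flatten())
mask[top] = 0.0
x_perturbed = x.detach() * mask.reshape(1, 1, 224, 224)  # zero out top-ranked pixels

with torch.no_grad():
    new_prob = torch.softmax(model(x_perturbed), dim=1)[0, cls]
print(float(prob[0, cls]), float(new_prob))
```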

Deep Learning Mental Health Dialogue System

no code implementations · 23 Jan 2023 · Lennart Brocki, George C. Dyer, Anna Gładka, Neo Christopher Chung

The system consists of a core generative model and post-processing algorithms.

Hallucination

Fidelity of Interpretability Methods and Perturbation Artifacts in Neural Networks

no code implementations · 6 Mar 2022 · Lennart Brocki, Neo Christopher Chung

Despite excellent performance of deep neural networks (DNNs) in image classification, detection, and prediction, characterizing how DNNs make a given decision remains an open problem, resulting in a number of interpretability methods.

Image Classification

Human in the Loop for Machine Creativity

no code implementations · 7 Oct 2021 · Neo Christopher Chung

Overall, these HITL approaches will increase interaction between humans and AI, and thus help future AI systems better understand our own creative and emotional processes.

Input Bias in Rectified Gradients and Modified Saliency Maps

no code implementations · 10 Nov 2020 · Lennart Brocki, Neo Christopher Chung

In particular, gradients of classes or concepts with respect to the input features (e.g., pixels in images) are often used as importance scores or estimators, which are visualized in saliency maps.
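
A minimal sketch of the technique mentioned in the snippet above: the gradient of a class score with respect to input pixels, visualized as a saliency map, alongside a gradient-times-input variant of the kind where a dependence on pixel intensity can enter. The model and input are placeholders.

```python
# Plain gradient saliency map and a gradient-times-input variant.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(x)[0].max()          # score of the top class
score.backward()

saliency = x.grad.abs().max(dim=1).values             # plain gradient map, (1, 224, 224)
grad_times_input = (x.grad * x.detach()).sum(dim=1)    # input-weighted variant
print(saliency.shape, grad_times_input.shape)
```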

Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models

1 code implementation · 29 Oct 2019 · Lennart Brocki, Neo Christopher Chung

Evaluating, explaining, and visualizing high-level concepts in generative models, such as variational autoencoders (VAEs), is challenging in part due to a lack of known prediction classes that are required to generate saliency maps in supervised learning.
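
A minimal sketch of the idea behind concept saliency maps under stated assumptions: with no class labels available, a scalar concept score is defined in the latent space of a generative model and its gradient with respect to the input is visualized. The toy encoder, random concept vector, and dot-product score are illustrative only, not the paper's exact setup.

```python
# Concept score in latent space, differentiated back to the input pixels.
import torch
import torch.nn as nn

# Toy encoder standing in for the encoder half of a VAE.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 16),          # 16-dimensional latent code
)

concept_vector = torch.randn(16)            # e.g. a "smiling" direction in latent space
x = torch.randn(1, 1, 28, 28, requires_grad=True)

z = encoder(x)
concept_score = (z * concept_vector).sum()  # scalar score for this concept
concept_score.backward()

saliency = x.grad.abs()[0, 0]               # (28, 28) map of concept-relevant pixels
print(saliency.shape)
```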
