no code implementations • 14 Dec 2022 • Jongseong Jang, Daeun Kyung, Seung Hwan Kim, Honglak Lee, Kyunghoon Bae, Edward Choi
However, large-scale and high-quality data to train powerful neural networks are rare in the medical domain as the labeling must be done by qualified experts.
1 code implementation • 14 Mar 2022 • Ruiwen Li, Zheda Mai, Chiheb Trabelsi, Zhibo Zhang, Jongseong Jang, Scott Sanner
In this paper, we propose TransCAM, a Conformer-based solution to WSSS that explicitly leverages the attention weights from the transformer branch of the Conformer to refine the CAM generated from the CNN branch.
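The core idea — propagating CNN-branch activations along transformer attention affinities — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the shapes and the assumption that attention rows are normalized are mine.

```python
import numpy as np

def refine_cam_with_attention(cam, attn):
    """Illustrative sketch of attention-based CAM refinement.

    cam:  (C, H, W) class activation maps from the CNN branch
    attn: (H*W, H*W) patch-to-patch attention affinity from the
          transformer branch (rows assumed to sum to 1)
    Returns attention-refined CAMs of shape (C, H, W).
    """
    c, h, w = cam.shape
    flat = cam.reshape(c, h * w)     # flatten spatial dimensions
    refined = flat @ attn.T          # spread activation along attention affinities
    return refined.reshape(c, h, w)

# toy example: one class, a 2x2 map with a single hot pixel,
# and uniform attention, which spreads the activation evenly
cam = np.array([[[1.0, 0.0], [0.0, 0.0]]])
attn = np.full((4, 4), 0.25)
out = refine_cam_with_attention(cam, attn)
```

With uniform attention the single activation is distributed evenly across all four positions, which shows how the attention matrix redistributes the CAM's localization signal.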
Weakly-Supervised Semantic Segmentation
1 code implementation • 28 Nov 2021 • Zhibo Zhang, Jongseong Jang, Chiheb Trabelsi, Ruiwen Li, Scott Sanner, Yeonjeong Jeong, Dongsub Shim
Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification.
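The contrastive objective the snippet refers to can be illustrated with a minimal InfoNCE-style loss — a standard formulation used here for exposition, not necessarily the paper's exact objective:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE contrastive loss sketch.

    z1, z2: (N, D) L2-normalized embeddings of two augmented views;
    row i of z1 and row i of z2 come from the same source image.
    """
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True)
    # positives sit on the diagonal; maximize their probability
    return -np.log(np.diag(probs)).mean()
```

When the two views embed to the same points (e.g. `info_nce_loss(np.eye(2), np.eye(2))`), the loss is near zero; mismatched pairs drive it up, which is what pulls positives together and pushes negatives apart.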
no code implementations • 25 Nov 2021 • ChangHwan Lee, Yeesuk Kim, Bong Gun Lee, Doosup Kim, Jongseong Jang
In addition, using the class activation map, we identified that the Cut&Remain method drives a model to focus efficiently on relevant subtle and small regions.
no code implementations • 29 May 2021 • Ruiwen Li, Zhibo Zhang, Jiani Li, Chiheb Trabelsi, Scott Sanner, Jongseong Jang, Yeonjeong Jeong, Dongsub Shim
Recent years have seen the introduction of a range of methods for post-hoc explainability of image classifier predictions.
1 code implementation • 15 Feb 2021 • Sam Sattarzadeh, Mahesh Sudhakar, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
However, the average gradient-based terms deployed in this method underestimate the contribution of the representations discovered by the model to its predictions.
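The averaging the snippet criticizes can be seen in a Grad-CAM-style channel weighting (my assumption about which method is meant): each channel's weight is the spatial mean of its gradients, so a strong but spatially small gradient response is diluted.

```python
import numpy as np

def grad_cam_weights(gradients):
    """Grad-CAM-style channel weights: the spatial average of the
    gradients of the class score w.r.t. each feature map.

    gradients: (C, H, W) -> weights: (C,)
    A single averaged weight per channel can wash out localized but
    important gradient responses -- the underestimation described above.
    """
    return gradients.mean(axis=(1, 2))

# a sharp, localized gradient spike averages down to a modest weight
grads = np.zeros((1, 4, 4))
grads[0, 0, 0] = 16.0
weights = grad_cam_weights(grads)
```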
no code implementations • 15 Feb 2021 • Mahesh Sudhakar, Sam Sattarzadeh, Konstantinos N. Plataniotis, Jongseong Jang, Yeonjeong Jeong, Hyunwoo Kim
Explainable AI (XAI) is an active research area that interprets a neural network's decisions, ensuring transparency and trust in task-specific learned models.
Computational Efficiency • Explainable Artificial Intelligence (XAI)
no code implementations • 1 Oct 2020 • Sam Sattarzadeh, Mahesh Sudhakar, Anthony Lem, Shervin Mehryar, K. N. Plataniotis, Jongseong Jang, Hyunwoo Kim, Yeonjeong Jeong, Sangmin Lee, Kyunghoon Bae
In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation.
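Aggregating per-layer visualization maps into one explanation might look like the following sketch; the normalization and nearest-neighbor resizing choices are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def aggregate_layer_maps(maps, out_hw):
    """Hypothetical aggregation of per-layer visualization maps:
    normalize each map to [0, 1], resize to a common resolution
    (nearest-neighbor for simplicity), and average.

    maps: list of 2-D arrays of varying spatial size.
    """
    h, w = out_hw
    acc = np.zeros((h, w))
    for m in maps:
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # per-map normalization
        ys = np.arange(h) * m.shape[0] // h              # nearest-neighbor row indices
        xs = np.arange(w) * m.shape[1] // w              # nearest-neighbor column indices
        acc += m[np.ix_(ys, xs)]
    return acc / len(maps)

agg = aggregate_layer_maps([np.array([[0.0, 2.0], [2.0, 0.0]])], (2, 2))
```

Averaging maps from shallow and deep layers is one simple way to combine fine spatial detail with semantically complete coverage.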
3 code implementations • 31 Aug 2020 • Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, Jongseong Jang
As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption.
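A common way to keep the memory footprint fixed in this setting is a reservoir-sampling replay buffer — shown here as a generic sketch of the memory-bounded strategy, not the paper's specific method:

```python
import random

class ReservoirBuffer:
    """Reservoir-sampling replay buffer for online continual learning.

    Holds at most `capacity` samples; every sample from the stream has
    an equal chance of being retained, so memory stays constant no
    matter how long the stream runs.
    """
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)       # buffer not yet full
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item      # replace uniformly at random

buf = ReservoirBuffer(3)
for x in range(10):
    buf.add(x)
```

Old samples in the buffer are replayed alongside new data to mitigate catastrophic forgetting under the fixed memory budget.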
no code implementations • 19 Sep 2019 • Jihyeun Yoon, Kyungyul Kim, Jongseong Jang
Deep neural network classifiers are known to be vulnerable to input perturbations crafted by adversarial attacks to force misclassification.
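The canonical example of such a perturbation is the Fast Gradient Sign Method (FGSM) — shown here as a generic illustration; the paper's attack setup may differ:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method perturbation.

    x:    input (e.g. image pixels in [0, 1])
    grad: gradient of the loss w.r.t. x
    eps:  perturbation budget
    Steps every input dimension by eps in the direction that
    increases the loss, then clips back to the valid range.
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

x_adv = fgsm_perturb(np.array([0.5]), np.array([-2.0]))
```

A perturbation this small is often imperceptible to humans yet sufficient to flip a classifier's prediction.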
Adversarial Attack • Explainable Artificial Intelligence (XAI)