2 code implementations • 15 Aug 2023 • Seokhyeon Ha, Sunbeom Jung, Jungwoo Lee
By leveraging batch normalization layers and integrating linear probing and fine-tuning, our DAFT significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets.
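The two-stage recipe named here (linear probing followed by full fine-tuning) can be illustrated with a toy numpy model. This is a minimal sketch under stated assumptions, not DAFT itself: the paper's use of batch-normalization layers is not reproduced, and the names `X`, `W1`, `w2` are illustrative stand-ins for a real backbone and classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: binary labels from a random linear rule.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

# "Backbone" W1 (pretrained features, stand-in) and linear "head" w2.
W1 = rng.normal(size=(10, 5)) * 0.1
w2 = np.zeros(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(W1, w2):
    p = sigmoid(X @ W1 @ w2)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Stage 1: linear probing -- backbone frozen, only the head is trained.
for _ in range(200):
    h = X @ W1                        # frozen features
    p = sigmoid(h @ w2)
    grad_w2 = h.T @ (p - y) / len(y)  # logistic-loss gradient w.r.t. head
    w2 -= 0.5 * grad_w2
lp_loss = loss(W1, w2)

# Stage 2: fine-tuning -- both backbone and head are updated jointly,
# starting from the probed head (which limits early feature distortion).
for _ in range(200):
    h = X @ W1
    p = sigmoid(h @ w2)
    err = (p - y) / len(y)
    grad_w2 = h.T @ err
    grad_W1 = X.T @ np.outer(err, w2)  # chain rule through the backbone
    w2 -= 0.1 * grad_w2
    W1 -= 0.1 * grad_W1
ft_loss = loss(W1, w2)
```

Initializing fine-tuning from a trained linear head, rather than a random one, is the key design choice this sketch shares with the linear-probing-then-fine-tuning literature: it avoids large early gradients flowing into the pretrained features.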
no code implementations • 13 May 2023 • Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi
We propose a fine-tuning algorithm for brain tumor segmentation that needs only a few data samples and helps networks avoid forgetting their original tasks.
no code implementations • 29 Sep 2021 • Hyungjun Joo, Seokhyeon Ha, Jae Myung Kim, Sungyeob Han, Jungwoo Lee
As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions.
no code implementations • 1 Jan 2021 • Jae Myung Kim, Eunji Kim, Seokhyeon Ha, Sungroh Yoon, Jungwoo Lee
Saliency maps have been widely used to explain the behavior of an image classifier.