no code implementations • 8 Apr 2024 • Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee
One of the objectives of continual learning is to prevent catastrophic forgetting in learning multiple tasks sequentially, and the existing solutions have been driven by the conceptualization of the plasticity-stability dilemma.
1 code implementation • NeurIPS 2023 • Dohyeok Lee, Seungyub Han, Taehyun Cho, Jungwoo Lee
Alleviating overestimation bias is a critical challenge for deep reinforcement learning to achieve successful performance on more complex tasks or offline datasets containing out-of-distribution data.
no code implementations • 3 Nov 2023 • Tuyen P. Le, Hieu T. Nguyen, Seungyeol Baek, Taeyoun Kim, Jungwoo Lee, Seongjung Kim, HyunJin Kim, Misu Jung, Daehoon Kim, Seokyong Lee, Daewoo Choi
Macro placement is a critical phase in chip design, which becomes more intricate when involving general rectilinear macros and layout areas.
no code implementations • NeurIPS 2023 • Taehyun Cho, Seungyub Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee
Distributional reinforcement learning algorithms have attempted to utilize estimated uncertainty for exploration, such as optimism in the face of uncertainty.
2 code implementations • 15 Aug 2023 • Seokhyeon Ha, Sunbeom Jung, Jungwoo Lee
By leveraging batch normalization layers and integrating linear probing and fine-tuning, our DAFT significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets.
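The two-stage recipe above (linear probing on frozen features, then full fine-tuning) can be sketched on a toy model. This is an illustrative sketch only, not the authors' DAFT implementation: `W_backbone`, `features`, and the learning rates are all stand-ins, and a one-layer tanh "backbone" replaces a deep network with batch-norm layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained backbone": a fixed nonlinear feature extractor.
X = rng.normal(size=(200, 10))                 # inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # binary labels
W_backbone = rng.normal(size=(10, 5))          # "pretrained" weights

def features(X, W):
    return np.tanh(X @ W)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def bce_loss(p, y):
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Stage 1: linear probing -- backbone frozen, train only the head.
w_head = np.zeros(5)
for _ in range(200):
    F = features(X, W_backbone)
    p = sigmoid(F @ w_head)
    w_head -= 0.5 * (F.T @ (p - y) / len(y))

# Stage 2: fine-tuning -- unfreeze the backbone, update both jointly.
# Starting from a good head (stage 1) limits feature distortion.
W_ft = W_backbone.copy()
for _ in range(200):
    F = features(X, W_ft)
    p = sigmoid(F @ w_head)
    err = (p - y) / len(y)
    w_head -= 0.1 * (F.T @ err)
    W_ft -= 0.1 * (X.T @ (np.outer(err, w_head) * (1 - F ** 2)))
```

The key design choice mirrored here is ordering: probing first gives the head a sensible starting point, so the subsequent backbone updates are small and the pretrained features are not distorted away.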
no code implementations • 13 May 2023 • Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi
We propose a fine-tuning algorithm for brain tumor segmentation that requires only a few data samples and prevents the network from forgetting the original tasks.
2 code implementations • CVPR 2023 • Youngwook Kim, Jae Myung Kim, Jieun Jeong, Cordelia Schmid, Zeynep Akata, Jungwoo Lee
Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
1 code implementation • CVPR 2022 • Youngwook Kim, Jae Myung Kim, Zeynep Akata, Jungwoo Lee
In this work, we first regard unobserved labels as negative labels, casting the WSML task into noisy multi-label classification.
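The assumed-negative scheme above can be sketched as a loss function: every unobserved label is treated as a negative, and because some of those assumed negatives are actually false (the "noise"), the largest per-entry losses can be rejected. This is a minimal sketch of the idea, not the paper's exact method; `reject_frac` and the quantile-based cutoff are illustrative choices.

```python
import numpy as np

def assumed_negative_bce(logits, observed_pos, reject_frac=0.1):
    """Binary cross-entropy where every unobserved label is assumed
    negative, then the largest losses among assumed negatives (likely
    false negatives) are dropped.

    logits:       (N, C) raw scores
    observed_pos: (N, C) 1 where a positive label was observed, else 0
    """
    p = 1 / (1 + np.exp(-logits))
    eps = 1e-9
    # Treat every unobserved (0) entry as a true negative.
    loss = -(observed_pos * np.log(p + eps)
             + (1 - observed_pos) * np.log(1 - p + eps))
    # Reject the top reject_frac of losses among assumed negatives,
    # since a very large loss suggests the label was a missed positive.
    neg_losses = loss[observed_pos == 0]
    if reject_frac > 0 and neg_losses.size:
        cutoff = np.quantile(neg_losses, 1 - reject_frac)
        loss = np.where((observed_pos == 0) & (loss > cutoff), 0.0, loss)
    return loss.mean()
```

Rejection never touches observed positives, so the supervision that is known to be correct is always kept.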
no code implementations • 29 Sep 2021 • Jae Myung Kim, Eunji Kim, Sungroh Yoon, Jungwoo Lee, Cordelia Schmid, Zeynep Akata
Explaining a complex black-box system in a post-hoc manner is important to understand its predictions.
no code implementations • 29 Sep 2021 • Hyungjun Joo, Seokhyeon Ha, Jae Myung Kim, Sungyeob Han, Jungwoo Lee
As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions.
no code implementations • 29 Sep 2021 • Tae Hyun Cho, Sungyeob Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee
Distributional reinforcement learning aims to learn the distribution of returns under stochastic environments.
no code implementations • 29 Sep 2021 • Jaehak Cho, Jae Myung Kim, Sungyeob Han, Jungwoo Lee
To address the issue, we propose a novel method that generates a union of disjoint PIs.
no code implementations • 29 Sep 2021 • Sungyeob Han, Yeongmo Kim, Jungwoo Lee
Memory-based continual learning stores a small subset of the data from previous tasks and applies various methods such as quadratic programming and sample selection.
no code implementations • 1 Jan 2021 • Sungyeob Han, Yeongmo Kim, Jungwoo Lee
We also show that memory-based approaches have an inherent problem of overfitting to memory, which degrades the performance on previously learned tasks, namely catastrophic forgetting.
no code implementations • 1 Jan 2021 • Jae Myung Kim, Eunji Kim, Seokhyeon Ha, Sungroh Yoon, Jungwoo Lee
Saliency maps have been widely used to explain the behavior of an image classifier.
no code implementations • 17 Aug 2020 • Minchul Kim, Jungwoo Lee
We consider information-theoretic privacy in federated submodel learning, where a global server has multiple submodels.
no code implementations • 16 Feb 2020 • Jae Myung Kim, Hyungjin Kim, Chanwoo Park, Jungwoo Lee
Our work aims to improve robustness by adding a REST module in front of any black box and training only the REST module, without retraining the original black-box model, in an end-to-end manner; i.e., we convert real-world data into the training distribution for which the black-box model performs best.
no code implementations • 25 Sep 2019 • Hyeungill Lee, Sungyeob Han, Jungwoo Lee
However, the adversarial attack methods used in these techniques are fixed, making the model robust only to the attacks used in training, which is widely known as an overfitting problem.
no code implementations • ICLR 2019 • Sungyeob Han, Daeyoung Kim, Jungwoo Lee
We propose a novel unsupervised classification method based on the graph Laplacian.
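The abstract does not specify the method's details, so as background, a minimal graph-Laplacian classification sketch is spectral bipartitioning: build an affinity graph over the data, form the Laplacian, and split by the sign of the Fiedler vector. The Gaussian affinity and the sign-based split below are standard illustrative choices, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated point clouds in 2-D (unlabeled data).
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(5, 0.3, (30, 2))])

# Gaussian affinity matrix and unnormalized graph Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# bipartitions the graph; its sign gives cluster assignments.
eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
fiedler = eigvecs[:, 1]
labels = (fiedler > 0).astype(int)
```

For k > 2 classes, the usual extension embeds each point into the first k Laplacian eigenvectors and clusters in that space.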
no code implementations • 8 Dec 2018 • Chanwoo Park, Jae Myung Kim, Seok Hyeon Ha, Jungwoo Lee
In this paper, we show that predictive uncertainty can be efficiently estimated when we incorporate the concept of gradient uncertainty into posterior sampling.
no code implementations • 9 Oct 2017 • Jungwoo Lee, Dongjin Choi, Lee Sael
Also, CTD-S is 5~86x faster and 7~12x more memory-efficient than the state-of-the-art method, owing to its removal of redundancy.
no code implementations • 9 May 2017 • Hyeungill Lee, Sungyeob Han, Jungwoo Lee
The generator network generates an adversarial perturbation that can easily fool the classifier network by using the gradient of each image.
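The paper trains a generator network for this; as a simpler illustration of the underlying gradient-based perturbation, here is an FGSM-style sketch against a toy linear classifier. The classifier `w`, `b` and the step size `eps` are illustrative stand-ins, not the paper's generator.

```python
import numpy as np

# A fixed linear "classifier" on 2-D inputs (stand-in for a trained network).
w = np.array([1.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(x, y):
    p = sigmoid(x @ w + b)
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def fgsm_perturbation(x, y, eps=0.5):
    # Gradient of the loss w.r.t. the input; for this linear model it is
    # (p - y) * w.  The perturbation steps along the gradient's sign,
    # which increases the loss while staying inside an L-inf ball.
    p = sigmoid(x @ w + b)
    grad = (p - y) * w
    return eps * np.sign(grad)

x = np.array([1.0, 0.0])   # correctly classified positive (w @ x = 1)
y = 1.0
x_adv = x + fgsm_perturbation(x, y, eps=0.8)
```

A learned generator plays the same role as `fgsm_perturbation` here, but amortizes the gradient computation into a single forward pass per image.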