Search Results for author: Jungwoo Lee

Found 22 papers, 4 papers with code

On the Convergence of Continual Learning with Adaptive Methods

no code implementations 8 Apr 2024 Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee

One of the objectives of continual learning is to prevent catastrophic forgetting in learning multiple tasks sequentially, and the existing solutions have been driven by the conceptualization of the plasticity-stability dilemma.

Continual Learning Image Classification

SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning

1 code implementation NeurIPS 2023 Dohyeok Lee, Seungyub Han, Taehyun Cho, Jungwoo Lee

Alleviating overestimation bias is a critical challenge for deep reinforcement learning to achieve successful performance on more complex tasks or offline datasets containing out-of-distribution data.

Offline RL Q-Learning +1
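As a toy illustration of the overestimation problem this entry targets (not the SPQR method itself; the Q-values and noise level below are made up), the following sketch shows how the max over noisy Q-estimates is biased upward, and how a pessimistic elementwise min over an ensemble counteracts it:

```python
import numpy as np

rng = np.random.default_rng(0)

# True action values for a single state; estimators see them with noise.
q_true = np.array([1.0, 0.5, 0.2])
noise_std = 0.5
n_trials = 10_000
n_ensemble = 5

max_single, max_min_ensemble = [], []
for _ in range(n_trials):
    # A single noisy estimator overestimates max_a Q(s, a) on average,
    # because the max picks up positive noise.
    q_single = q_true + rng.normal(0, noise_std, size=3)
    max_single.append(q_single.max())
    # An elementwise min over an ensemble is a pessimistic estimate
    # that counteracts this bias.
    q_ens = q_true + rng.normal(0, noise_std, size=(n_ensemble, 3))
    max_min_ensemble.append(q_ens.min(axis=0).max())

print(np.mean(max_single))        # above the true max of 1.0: overestimation
print(np.mean(max_min_ensemble))  # pulled back toward (or below) 1.0
```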

Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion

no code implementations NeurIPS 2023 Taehyun Cho, Seungyub Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee

Distributional reinforcement learning algorithms have attempted to utilize estimated uncertainty for exploration, such as optimism in the face of uncertainty.

Distributional Reinforcement Learning reinforcement-learning
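To make the idea of a risk criterion over a learned return distribution concrete, here is a minimal sketch (the quantile values and risk levels are invented, and this is not the paper's algorithm): a risk-neutral mean and a risk-averse CVaR can rank the same two actions differently, and a randomized risk level interpolates between the two behaviours.

```python
import numpy as np

rng = np.random.default_rng(1)

def cvar(quantiles, alpha):
    """Average of the worst alpha-fraction of quantile estimates (risk-averse)."""
    q = np.sort(quantiles)
    k = max(1, int(np.ceil(alpha * len(q))))
    return q[:k].mean()

# Toy quantile estimates of the return distribution for two actions.
z_a = np.array([-2.0, 0.0, 1.0, 2.0, 9.0])   # higher mean, heavy left tail
z_b = np.array([0.5, 0.8, 1.0, 1.2, 1.5])    # lower mean, low risk

print(z_a.mean(), z_b.mean())          # risk-neutral criterion prefers a
print(cvar(z_a, 0.4), cvar(z_b, 0.4))  # risk-averse criterion prefers b

# A randomized criterion: each decision samples a risk level alpha,
# so the greedy action varies between pessimistic and neutral choices.
alpha = rng.uniform(0.2, 1.0)
greedy = 0 if cvar(z_a, alpha) > cvar(z_b, alpha) else 1
```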

Domain-Aware Fine-Tuning: Enhancing Neural Network Adaptability

2 code implementations 15 Aug 2023 Seokhyeon Ha, Sunbeom Jung, Jungwoo Lee

By leveraging batch normalization layers and integrating linear probing and fine-tuning, our DAFT significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets.
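The linear-probing-then-fine-tuning schedule mentioned here can be sketched on a toy linear model (this illustrates the generic two-phase recipe only, not DAFT's batch-normalization mechanism; the data, shapes, and learning rates are made up): first train only the head on frozen "pretrained" features, then unfreeze everything with a small learning rate, which leaves the backbone close to its pretrained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "pretrained backbone" W_pre and a fresh linear head w.
X = rng.normal(size=(200, 8))
W_pre = rng.normal(size=(8, 4))              # frozen feature extractor
w_true = rng.normal(size=4)
y = X @ W_pre @ w_true + 0.05 * rng.normal(size=200)

def mse(W, w):
    return np.mean((X @ W @ w - y) ** 2)

# Phase 1 (linear probing): freeze the backbone, train only the head.
W, w = W_pre.copy(), np.zeros(4)
for _ in range(2000):
    feats = X @ W
    grad_w = 2 * feats.T @ (feats @ w - y) / len(X)
    w -= 0.01 * grad_w

# Phase 2 (fine-tuning): unfreeze everything with a small learning rate.
for _ in range(200):
    feats = X @ W
    err = feats @ w - y
    grad_w = 2 * feats.T @ err / len(X)
    grad_W = 2 * X.T @ np.outer(err, w) / len(X)
    w -= 1e-3 * grad_w
    W -= 1e-3 * grad_W

distortion = np.linalg.norm(W - W_pre)  # small: features barely moved
print(mse(W, w), distortion)
```

Because the head is already well fit before the backbone is unfrozen, phase 2 starts with small residuals and barely distorts the pretrained features, which is the intuition behind probing before fine-tuning.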

Learning to Learn Unlearned Feature for Brain Tumor Segmentation

no code implementations 13 May 2023 Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi

We propose a fine-tuning algorithm for brain tumor segmentation that needs only a few data samples and helps networks not to forget the original tasks.

Active Learning Brain Tumor Segmentation +6

Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification

2 code implementations CVPR 2023 Youngwook Kim, Jae Myung Kim, Jieun Jeong, Cordelia Schmid, Zeynep Akata, Jungwoo Lee

Based on these findings, we propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.

Classification Multi-Label Classification

Large Loss Matters in Weakly Supervised Multi-Label Classification

1 code implementation CVPR 2022 Youngwook Kim, Jae Myung Kim, Zeynep Akata, Jungwoo Lee

In this work, we first regard unobserved labels as negative labels, casting the WSML task into noisy multi-label classification.

Classification Memorization +1
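The "assume-negative then handle noise" framing above can be sketched as follows (a simplified large-loss-rejection scheme with made-up probabilities; the paper's actual training procedure is more involved): unobserved labels are filled in as negatives, and the largest losses among those assumed negatives, which are the most likely to be unobserved positives, are rejected.

```python
import numpy as np

def bce(p, y):
    """Elementwise binary cross-entropy."""
    eps = 1e-7
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Predicted label probabilities and a partially observed label vector,
# where unobserved labels were filled in as negatives (0).
probs = np.array([0.9, 0.1, 0.8, 0.2, 0.7])
assumed = np.array([1,   0,   0,   0,   1])   # index 2 is a false negative

losses = bce(probs, assumed)

# Reject the largest losses among assumed negatives: a confident positive
# prediction on an assumed negative is likely an unobserved positive.
neg_mask = assumed == 0
reject_rate = 0.3
k = int(np.ceil(reject_rate * neg_mask.sum()))
thresh = np.sort(losses[neg_mask])[-k]
keep = ~(neg_mask & (losses >= thresh))

clean_loss = losses[keep].mean()
print(keep)  # the suspicious assumed negative at index 2 is rejected
```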

Variational Perturbations for Visual Feature Attribution

no code implementations29 Sep 2021 Jae Myung Kim, Eunji Kim, Sungroh Yoon, Jungwoo Lee, Cordelia Schmid, Zeynep Akata

Explaining a complex black-box system in a post-hoc manner is important for understanding its predictions.

Beyond Examples: Constructing Explanation Space for Explaining Prototypes

no code implementations29 Sep 2021 Hyungjun Joo, Seokhyeon Ha, Jae Myung Kim, Sungyeob Han, Jungwoo Lee

As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions.

On the Convergence of Nonconvex Continual Learning with Adaptive Learning Rate

no code implementations29 Sep 2021 Sungyeob Han, Yeongmo Kim, Jungwoo Lee

Memory-based continual learning stores a small subset of the data from previous tasks and applies methods such as quadratic programming and sample selection.

Continual Learning Image Classification
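One common instantiation of the episodic-memory idea described above (an A-GEM-style gradient projection, shown here as a generic sketch rather than this paper's method, with made-up gradient vectors) constrains each update so it does not increase the loss on the stored memory examples:

```python
import numpy as np

def project_gradient(g, g_mem):
    """If the current-task gradient g conflicts with the memory gradient
    g_mem (negative dot product, i.e. the update would increase the loss
    on stored examples), project g onto the constraint surface."""
    dot = g @ g_mem
    if dot < 0:
        g = g - (dot / (g_mem @ g_mem)) * g_mem
    return g

g = np.array([1.0, -2.0])      # gradient on the current task
g_mem = np.array([1.0, 1.0])   # gradient on the episodic memory

g_proj = project_gradient(g, g_mem)
print(g_proj @ g_mem)  # 0.0: the update no longer conflicts with memory
```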

Nonconvex Continual Learning with Episodic Memory

no code implementations1 Jan 2021 Sungyeob Han, Yeongmo Kim, Jungwoo Lee

We also show that memory-based approaches have an inherent problem of overfitting to memory, which degrades the performance on previously learned tasks, namely catastrophic forgetting.

Continual Learning Image Classification

Variational saliency maps for explaining model's behavior

no code implementations1 Jan 2021 Jae Myung Kim, Eunji Kim, Seokhyeon Ha, Sungroh Yoon, Jungwoo Lee

Saliency maps have been widely used to explain the behavior of an image classifier.

Information-Theoretic Privacy in Federated Submodel learning

no code implementations 17 Aug 2020 Minchul Kim, Jungwoo Lee

We consider information-theoretic privacy in federated submodel learning, where a global server has multiple submodels.

Information Theory

REST: Performance Improvement of a Black Box Model via RL-based Spatial Transformation

no code implementations 16 Feb 2020 Jae Myung Kim, Hyungjin Kim, Chanwoo Park, Jungwoo Lee

Our work aims to improve robustness by adding a REST module in front of any black box and training only the REST module in an end-to-end manner, without retraining the original black-box model; i.e., we try to convert real-world data into the training distribution for which the black-box model performs best.
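The idea of training only a front-end transform while leaving the black box frozen can be sketched on a one-dimensional toy (this is a crude gradient-free stand-in for the paper's RL training; the data, shift, and search procedure are all invented): the box was "trained" on centred data, real-world inputs arrive shifted, and we search for the transform that maximizes black-box accuracy using only its predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen "black box": classifies 1-D points by sign, as if trained
# on centred data. We never touch its internals or gradients.
def black_box(x):
    return (x > 0).astype(int)

# Real-world data arrives shifted away from the training distribution.
shift_true = 3.0
x = rng.normal(size=500) + shift_true
y = (x > shift_true).astype(int)   # true labels in the shifted world

def accuracy(t):
    """Accuracy of the black box after the front-end transform x -> x - t."""
    return np.mean(black_box(x - t) == y)

# Gradient-free search over the transform parameter, since the black box
# exposes only predictions.
best_t, best_acc = 0.0, accuracy(0.0)
for t in np.linspace(-5, 5, 201):
    acc = accuracy(t)
    if acc > best_acc:
        best_t, best_acc = t, acc

print(best_t, best_acc)  # the learned transform undoes the shift
```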

Adversarial training with perturbation generator networks

no code implementations 25 Sep 2019 Hyeungill Lee, Sungyeob Han, Jungwoo Lee

However, the adversarial attack methods used in these techniques are fixed, making the model robust only against the attacks seen in training, which is widely known as an overfitting problem.

Adversarial Attack

Sampling-based Bayesian Inference with gradient uncertainty

no code implementations 8 Dec 2018 Chanwoo Park, Jae Myung Kim, Seok Hyeon Ha, Jungwoo Lee

In this paper, we show that predictive uncertainty can be efficiently estimated when we incorporate the concept of gradient uncertainty into posterior sampling.

Bayesian Inference

CTD: Fast, Accurate, and Interpretable Method for Static and Dynamic Tensor Decompositions

no code implementations 9 Oct 2017 Jungwoo Lee, Dongjin Choi, Lee Sael

Also, CTD-S is 5~86x faster and 7~12x more memory-efficient than the state-of-the-art method, achieved by removing redundancy.

Tensor Decomposition

Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN

no code implementations 9 May 2017 Hyeungill Lee, Sungyeob Han, Jungwoo Lee

The generator network generates an adversarial perturbation that can easily fool the classifier network by using the gradient of each image.

Adversarial Defense Generative Adversarial Network
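The gradient-based fooling perturbation that the generator above learns to produce can be illustrated with a classic FGSM-style step on a toy linear classifier (this shows only the perturbation principle, not the paper's generator network; weights and inputs are made up): a small step in the sign of the input gradient increases the loss and pushes the input toward misclassification.

```python
import numpy as np

# Toy linear classifier p(y=1|x) = sigmoid(w @ x).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.4, 0.8])
y = 1

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(x):
    """Cross-entropy of the classifier on input x for the true label y."""
    p = sigmoid(w @ x)
    return -np.log(p) if y == 1 else -np.log(1 - p)

# Gradient of the loss w.r.t. the input; for this model it is (p - y) * w.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style perturbation: step in the gradient's sign direction.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)
print(loss(x), loss(x_adv))  # the loss increases on the perturbed input
```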
