Search Results for author: Seungyub Han

Found 4 papers, 1 paper with code

On the Convergence of Continual Learning with Adaptive Methods

no code implementations • 8 Apr 2024 • Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee

One objective of continual learning is to prevent catastrophic forgetting when learning multiple tasks sequentially, and existing solutions have largely been framed around the plasticity-stability dilemma.

Tasks: Continual Learning • Image Classification
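
The paper analyzes convergence when continual learning is run with adaptive optimizers such as Adam. The sketch below is only a minimal illustration of that setting: sequential task training with Adam plus a quadratic penalty anchoring weights near the previous task's solution (a common stability device, not the paper's algorithm); the model, penalty form, and `train_task` loop are all assumptions.

```python
# Minimal sketch: sequential task training with an adaptive optimizer (Adam)
# plus an illustrative quadratic penalty toward the previous task's weights.
# NOT the paper's algorithm; penalty form and task structure are assumed.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
anchor = None          # snapshot of weights after the previous task
stability_coef = 1.0   # trades plasticity (new task) vs. stability (old tasks)

def train_task(loader, epochs=1):
    global anchor
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            if anchor is not None:  # stability term toward old weights
                loss = loss + stability_coef * sum(
                    ((p - a) ** 2).sum()
                    for p, a in zip(model.parameters(), anchor))
            loss.backward()
            opt.step()
    # freeze a copy of the weights to anchor the next task
    anchor = [p.detach().clone() for p in model.parameters()]
```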

SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning

1 code implementation • NeurIPS 2023 • Dohyeok Lee, Seungyub Han, Taehyun Cho, Jungwoo Lee

Alleviating overestimation bias is a critical challenge for deep reinforcement learning, particularly on more complex tasks and on offline datasets containing out-of-distribution data.

Tasks: Offline RL • Q-Learning • +1
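
SPQR's spiked-random-model independence regularizer is not reproduced here; the sketch below shows only the standard ensemble ingredient such methods build on: a min-over-ensemble Bellman target, which yields pessimistic value estimates and suppresses overestimation. Network sizes, dimensions, and the `update` interface are assumptions.

```python
# Minimal sketch of a Q-ensemble with a min-over-ensemble Bellman target,
# a standard way to suppress overestimation. SPQR's spiked-random-model
# regularizer is NOT implemented; all hyperparameters are assumptions.
import torch
import torch.nn as nn

N, obs_dim, act_dim, gamma = 4, 17, 6, 0.99
qs = [nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                    nn.Linear(64, 1)) for _ in range(N)]
opts = [torch.optim.Adam(q.parameters(), lr=3e-4) for q in qs]

def update(s, a, r, s2, a2):
    # pessimistic target: minimum over the ensemble's next-state values
    with torch.no_grad():
        q_next = torch.min(
            torch.stack([q(torch.cat([s2, a2], -1)) for q in qs]),
            dim=0).values
        target = r + gamma * q_next
    for q, opt in zip(qs, opts):  # regress every member onto the shared target
        opt.zero_grad()
        loss = ((q(torch.cat([s, a], -1)) - target) ** 2).mean()
        loss.backward()
        opt.step()
```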

Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion

no code implementations • NeurIPS 2023 • Taehyun Cho, Seungyub Han, Heesoo Lee, Kyungjae Lee, Jungwoo Lee

Distributional reinforcement learning algorithms have attempted to exploit estimated uncertainty for exploration, for example through optimism in the face of uncertainty.

Tasks: Distributional Reinforcement Learning • reinforcement-learning
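
As one illustrative reading of "randomizing the risk criterion", the sketch below selects actions from a quantile-based critic under a risk level drawn at random per decision (a CVaR-style lower-tail average). This is an assumption-laden stand-in, not the paper's exact scheme; the network, shapes, and sampling distribution are all hypothetical.

```python
# Minimal sketch: action selection under a randomized risk criterion with a
# quantile-based distributional critic. The uniform risk-level sampling is
# an illustrative assumption, not the paper's perturbation scheme.
import torch
import torch.nn as nn

n_quantiles, obs_dim, n_actions = 32, 8, 4
taus = (torch.arange(n_quantiles) + 0.5) / n_quantiles  # quantile midpoints
z = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                  nn.Linear(64, n_actions * n_quantiles))

def act(s):
    quantiles = z(s).view(n_actions, n_quantiles)  # per-action return dist.
    alpha = torch.rand(()).item()                  # random risk level in (0, 1)
    mask = taus <= max(alpha, taus[0].item())      # keep at least one quantile
    risk_values = quantiles[:, mask].mean(dim=1)   # CVaR-style action values
    return int(risk_values.argmax())
```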

Learning to Learn Unlearned Feature for Brain Tumor Segmentation

no code implementations • 13 May 2023 • Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi

We propose a fine-tuning algorithm for brain tumor segmentation that requires only a few data samples and helps the network avoid forgetting its original tasks.

Tasks: Active Learning • Brain Tumor Segmentation • +6
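
One common way to fine-tune on a few new samples without forgetting is to distill from a frozen copy of the original network; the sketch below shows that generic pattern, not the paper's algorithm. The toy segmenter, distillation term, and `finetune_step` interface are assumptions.

```python
# Minimal sketch: few-sample fine-tuning with a distillation term that keeps
# predictions on old-task inputs close to a frozen copy of the original
# network. Illustrative stand-in only; model and loss form are assumed.
import copy
import torch
import torch.nn as nn

seg_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 2, 1))        # toy 2-class segmenter
teacher = copy.deepcopy(seg_net).eval()             # frozen original-task net
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def finetune_step(x_new, y_new, x_old, distill_coef=1.0):
    opt.zero_grad()
    loss = ce(seg_net(x_new), y_new)                # adapt to few new samples
    # stability: match the original network's outputs on old-task inputs
    loss = loss + distill_coef * nn.functional.mse_loss(
        seg_net(x_old), teacher(x_old))
    loss.backward()
    opt.step()
```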