no code implementations • 8 Apr 2024 • Seungyub Han, Yeongmo Kim, Taehyun Cho, Jungwoo Lee
One objective of continual learning is to prevent catastrophic forgetting when learning multiple tasks sequentially, and existing solutions are largely framed around the plasticity-stability dilemma.
no code implementations • 13 May 2023 • Seungyub Han, Yeongmo Kim, Seokhyeon Ha, Jungwoo Lee, Seunghong Choi
We propose a fine-tuning algorithm for brain tumor segmentation that requires only a few data samples and prevents the network from forgetting its original tasks.
no code implementations • 29 Sep 2021 • Sungyeob Han, Yeongmo Kim, Jungwoo Lee
Memory-based continual learning stores a small subset of the data from previous tasks and applies various methods, such as quadratic programming and sample selection, to it.
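The memory-based setup described above can be sketched as a fixed-size replay buffer. The reservoir-sampling variant below is one common choice, not necessarily the authors' exact method; the buffer capacity and sampling interface are illustrative assumptions.

```python
import random

class ReservoirBuffer:
    """Fixed-size episodic memory filled via reservoir sampling,
    so every example seen so far is retained with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a stored example with probability capacity / n_seen
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw a replay minibatch to mix with the current task's batch
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReservoirBuffer(capacity=100)
for x in range(1000):       # stream of examples from earlier tasks
    buffer.add(x)
replay_batch = buffer.sample(32)
```

During training on a new task, `replay_batch` would be interleaved with new-task minibatches so the loss still covers earlier tasks.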
no code implementations • 1 Jan 2021 • Sungyeob Han, Yeongmo Kim, Jungwoo Lee
We also show that memory-based approaches have an inherent problem of overfitting to the memory, which degrades performance on previously learned tasks, i.e., catastrophic forgetting.