no code implementations • 2 Feb 2023 • Ya-nan Han, Jian-wei Liu
However, a key challenge in this continual learning paradigm is catastrophic forgetting: adapting a model to new tasks often severely degrades its performance on prior tasks.
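Catastrophic forgetting can be demonstrated in a few lines. The sketch below is a hypothetical illustration (not from the paper): a linear model is fit with gradient descent on task A, then on task B with a different target, and its task-A loss is measured before and after — the second round of training overwrites what was learned first.

```python
import numpy as np

# Hypothetical illustration of catastrophic forgetting (not the paper's setup):
# train a linear model on task A, then on task B, and watch task-A loss degrade.
rng = np.random.default_rng(0)

def make_task(w_true):
    X = rng.normal(size=(200, 2))
    return X, X @ w_true

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def gd(w, X, y, lr=0.1, epochs=50):
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

Xa, ya = make_task(np.array([1.0, -1.0]))   # task A
Xb, yb = make_task(np.array([-2.0, 3.0]))   # task B, different target function

w = gd(np.zeros(2), Xa, ya)
loss_a_before = mse(w, Xa, ya)              # near zero after training on A
w = gd(w, Xb, yb)                           # continue training on B only
loss_a_after = mse(w, Xa, ya)               # much larger: the model "forgot" A
```

Continual-learning methods such as the one proposed here aim to keep `loss_a_after` close to `loss_a_before` while still fitting task B.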
no code implementations • 12 Sep 2022 • Ya-nan Han, Jian-wei Liu
In this paper, we overcome these challenges by proposing a novel framework called Meta-learning update via Multi-scale Knowledge Distillation and Data Augmentation (MMKDDA).
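The framework's name references knowledge distillation. A common distillation objective — the temperature-scaled KL divergence between teacher and student outputs, in Hinton-style formulation — is sketched below; this is the generic loss, not necessarily the exact multi-scale variant MMKDDA uses, and the temperature `T` is a hypothetical choice.

```python
import numpy as np

# Generic knowledge-distillation loss (temperature-scaled KL divergence).
# Standard formulation, not necessarily MMKDDA's exact multi-scale loss.
def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student), scaled by T^2 as in standard distillation
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(np.mean(kl) * T * T)

student = np.array([[2.0, 0.5, -1.0]])
teacher_close = np.array([[2.1, 0.4, -1.2]])
loss_close = distillation_loss(student, teacher_close)   # small: near agreement
loss_far = distillation_loss(student, -teacher_close)    # larger: disagreement
```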
no code implementations • 9 Sep 2022 • Ya-nan Han, Jian-wei Liu
Based on this fact, in this paper we propose a new framework, named Selecting Related Knowledge for Online Continual Learning (SRKOCL), which incorporates an additional efficient channel attention mechanism to select the knowledge relevant to each task.
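A minimal NumPy sketch of an efficient channel attention block, assuming the mechanism resembles ECA-style attention: global average pooling produces a channel descriptor, a cheap shared 1-D convolution across channels produces gates, and a sigmoid reweights the feature map. The paper's exact module may differ, and the kernel weights below are a hypothetical choice.

```python
import numpy as np

# Sketch of an ECA-style efficient channel attention block (assumed form,
# not necessarily SRKOCL's exact module).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca_attention(feature_map, conv_kernel):
    # feature_map: (C, H, W); conv_kernel: (k,) 1-D weights shared across channels
    C, H, W = feature_map.shape
    pooled = feature_map.mean(axis=(1, 2))              # (C,) channel descriptor
    k = len(conv_kernel)
    padded = np.pad(pooled, k // 2, mode="edge")        # pad for same-size conv
    conv = np.array([padded[i:i + k] @ conv_kernel for i in range(C)])
    gates = sigmoid(conv)                               # (C,) per-channel gates in (0, 1)
    return feature_map * gates[:, None, None]           # reweighted features

x = np.random.default_rng(1).normal(size=(8, 4, 4))     # toy (C, H, W) features
out = eca_attention(x, np.array([0.25, 0.5, 0.25]))     # hypothetical kernel
```

Because the gate is a sigmoid, every channel is scaled by a factor in (0, 1), so attention can suppress unrelated channels without changing the tensor shape.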
no code implementations • 25 Jan 2022 • Ya-nan Han, Jian-wei Liu, Bing-biao Xiao, Xin-Tan Wang, Xiong-lin Luo
In this paper, we propose a new Bilevel Online Deep Learning (BODL) framework, which combines a bilevel optimization strategy with an online ensemble classifier.
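One common way to realize an online ensemble classifier is a Hedge-style multiplicative update, where each base learner's weight decays with its per-round loss. The sketch below shows that generic update, assuming 0/1 losses and a decay rate `beta`; it is illustrative, not the paper's exact BODL procedure.

```python
import numpy as np

# Hedge-style online ensemble update (generic sketch, assumed form; the exact
# BODL combination rule may differ). beta is a hypothetical decay rate.
def hedge_update(weights, losses, beta=0.5):
    weights = weights * (beta ** losses)    # multiplicatively penalize lossy learners
    return weights / weights.sum()          # renormalize to a distribution

def ensemble_predict(weights, predictions):
    return float(weights @ predictions)     # weighted vote in [0, 1]

w = np.ones(3) / 3                          # three base learners, uniform start
preds = np.array([1.0, 1.0, 0.0])           # learners 0 and 1 predict class 1
y_true = 1.0
losses = np.abs(preds - y_true)             # 0/1 losses this round
w = hedge_update(w, losses)                 # learner 2's weight shrinks
```

Over many rounds, weight mass concentrates on the base learners that incur the least cumulative loss, which is what lets the ensemble adapt online.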