1 code implementation • 20 Mar 2024 • Zhenyi Wang, Yan Li, Li Shen, Heng Huang
Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.
no code implementations • 14 Mar 2024 • Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones.
1 code implementation • 5 Feb 2024 • Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, DaCheng Tao
That is, there is a significant discrepancy in the representation distribution between the merged and individual models, resulting in poor performance of the merged MTL model.
no code implementations • 23 Nov 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Yongxian Wei, Baoyuan Wu, Chun Yuan, DaCheng Tao
TDS leads to a biased meta-learner because the task distribution is skewed towards newly generated tasks.
1 code implementation • 4 Oct 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, DaCheng Tao
This approach aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
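As an illustration of the idea, here is a minimal sketch of layer-wise merging with learnable coefficients; the task-vector parameterization and the entropy objective mentioned in the comments are assumptions for illustration, not necessarily the paper's exact formulation:

```python
import torch

# Minimal sketch of model merging with learnable, layer-wise coefficients.
pretrained = {"fc.weight": torch.randn(4, 4)}
finetuned = [
    {"fc.weight": pretrained["fc.weight"] + 0.1 * torch.randn(4, 4)}
    for _ in range(3)
]

# Task vectors: parameter deltas between fine-tuned and pretrained models.
task_vectors = [{k: ft[k] - pretrained[k] for k in pretrained} for ft in finetuned]

# One learnable coefficient per (task, layer); these would be optimized
# without the original training data, e.g. by minimizing prediction
# entropy on unlabeled test inputs.
coeffs = torch.full((len(task_vectors), len(pretrained)), 0.3, requires_grad=True)

def merge(pretrained, task_vectors, coeffs):
    merged = {}
    for j, name in enumerate(pretrained):
        merged[name] = pretrained[name] + sum(
            coeffs[i, j] * tv[name] for i, tv in enumerate(task_vectors)
        )
    return merged

merged_params = merge(pretrained, task_vectors, coeffs)
```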
no code implementations • 31 Aug 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao
Next, we train the CL model by minimizing the gap between the responses of the CL model and those of the black-box API on synthetic data, thereby transferring the API's knowledge to the CL model.
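A minimal sketch of this distillation step, assuming a stand-in `query_api` that returns only soft predictions (no weights or gradients, as with a real black-box service):

```python
import torch
import torch.nn.functional as F

# Stand-in for the black-box API: its weights are never exposed.
teacher_w = torch.randn(16, 10)

def query_api(x):
    with torch.no_grad():
        return F.softmax(x @ teacher_w, dim=-1)

student = torch.nn.Linear(16, 10)  # the CL model being trained
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x_syn = torch.randn(32, 16)   # synthetic data (e.g. from a generator)
    target = query_api(x_syn)     # API responses on the synthetic batch
    loss = F.kl_div(F.log_softmax(student(x_syn), dim=-1), target,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```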
no code implementations • 19 Aug 2023 • Tiehang Duan, Zhenyi Wang, Gianfranco Doretto, Fang Li, Cui Tao, Donald Adjeroh
In this work, we propose a principled approach that dynamically evolves the data to improve decoding robustness.
1 code implementation • 16 Jul 2023 • Zhenyi Wang, Enneng Yang, Li Shen, Heng Huang
Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting.
1 code implementation • 28 May 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, DaCheng Tao
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data.
no code implementations • 24 Apr 2023 • Zhenyi Wang, Hongcai Zhang
In this paper, we propose a novel customized load profile synthesis method based on conditional diffusion models for heterogeneous customers.
1 code implementation • CVPR 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, DaCheng Tao
The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data.
no code implementations • ICCV 2023 • Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guibing Guo, Xingwei Wang
In this paper, we first revisit the gradient projection method from the perspective of loss-surface flatness, and find that a lack of flatness in the loss surface leads to catastrophic forgetting of the old tasks when the projection constraint is relaxed to improve performance on new tasks.
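For context, a minimal sketch of the underlying gradient-projection mechanism (an assumed setup for illustration, not this paper's exact algorithm):

```python
import torch

def project_gradient(grad, basis):
    """Remove the components of `grad` that lie in span(basis).

    grad:  (d,) flattened gradient for the new task.
    basis: (d, k) matrix with orthonormal columns spanning important
           old-task directions.
    """
    return grad - basis @ (basis.T @ grad)

d, k = 8, 2
basis, _ = torch.linalg.qr(torch.randn(d, k))  # orthonormal old-task basis
grad = torch.randn(d)
g_proj = project_gradient(grad, basis)
# Relaxing the constraint (keeping a fraction of the projected-out
# component) trades old-task stability for new-task plasticity.
```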
no code implementations • CVPR 2023 • Zhenyi Wang, Li Shen, Donglin Zhan, Qiuling Suo, Yanjun Zhu, Tiehang Duan, Mingchen Gao
To make CL models deployed in safety-critical scenarios trustworthy and robust to corruptions, we propose a meta-learning framework of self-adaptive data augmentation to tackle corruption robustness in CL.
1 code implementation • 3 Sep 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Donglin Zhan, Tiehang Duan, Mingchen Gao
Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of unlabeled out-of-distribution (OOD) data; and (ii) how to prevent catastrophic forgetting on previously learned task distributions due to the task distribution shift.
1 code implementation • 15 Jul 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Tiehang Duan, Mingchen Gao
To address these problems, for the first time, we propose a principled memory evolution framework that dynamically evolves the memory data distribution, making the memory buffer gradually harder to memorize via distributionally robust optimization (DRO).
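A hedged sketch of what one DRO-inspired evolution step could look like; the single signed-gradient-ascent step with radius `eta` is an illustrative approximation of optimizing within an uncertainty set, not the paper's exact update:

```python
import torch
import torch.nn.functional as F

# Memory samples are moved by gradient *ascent* on the model's loss,
# making the replay buffer gradually harder to memorize.
model = torch.nn.Linear(16, 10)
mem_x = torch.randn(32, 16, requires_grad=True)   # memory buffer inputs
mem_y = torch.randint(0, 10, (32,))               # memory buffer labels

loss = F.cross_entropy(model(mem_x), mem_y)
(grad_x,) = torch.autograd.grad(loss, mem_x)

eta = 0.01
evolved_x = (mem_x + eta * grad_x.sign()).detach()  # harder memory samples
# Rehearsing on `evolved_x` resists trivial memorization of the buffer.
```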
1 code implementation • 18 Feb 2022 • Tianyu Zhao, Cheng Yang, Yibo Li, Quan Gan, Zhenyi Wang, Fengqi Liang, Huan Zhao, Yingxia Shao, Xiao Wang, Chuan Shi
Heterogeneous Graph Neural Networks (HGNNs) have been successfully employed in various tasks, but the importance of their different design dimensions remains unclear due to diverse architectures and application scenarios.
1 code implementation • CVPR 2022 • Zhenyi Wang, Li Shen, Tiehang Duan, Donglin Zhan, Le Fang, Mingchen Gao
We propose a domain-shift detection technique to capture latent domain changes and equip the meta-optimizer with it so that it works in this setting.
1 code implementation • 28 Dec 2021 • Tiehang Duan, Zhenyi Wang, Sheng Liu, Sargur N. Srihari, Hui Yang
In this work, we propose an uncertainty estimation and reduction model (UNCER) to quantify and mitigate the uncertainty during the EEG decoding process.
1 code implementation • ICCV 2021 • Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo, Mingchen Gao
In this paper, we explore a more practical and challenging setting where task distribution changes over time with domain shift.
no code implementations • 7 Feb 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on NTK theory.
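As an illustration of point 2), here is a closed-form kernel solve in the spirit of analytic adaptation; the RBF kernel and names below are stand-in assumptions, whereas the paper works with an NTK-style kernel:

```python
import torch

def rbf_kernel(a, b, gamma=1.0):
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

def analytic_adapt(x_support, y_support, x_query, lam=1e-3):
    # Kernel ridge regression: the adaptation is a single linear solve
    # rather than a sequence of inner-loop gradient steps.
    K = rbf_kernel(x_support, x_support)
    alpha = torch.linalg.solve(K + lam * torch.eye(len(K)), y_support)
    return rbf_kernel(x_query, x_support) @ alpha   # query predictions

x_s, y_s = torch.randn(5, 3), torch.randn(5, 1)  # one task's support set
x_q = torch.randn(4, 3)                          # its query set
preds = analytic_adapt(x_s, y_s, x_q)            # no gradient steps needed
```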
no code implementations • 1 Jan 2021 • Zhenyi Wang, Tiehang Duan, Donglin Zhan, Changyou Chen
However, a natural generalization to the sequential domain setting that avoids catastrophic forgetting has not been well investigated.
no code implementations • ICLR 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
Within this paradigm, we introduce two meta-learning algorithms in RKHS that no longer require an explicit inner-loop adaptation as in the MAML framework.
no code implementations • EMNLP 2020 • Bang An, Jie Lyu, Zhenyi Wang, Chunyuan Li, Changwei Hu, Fei Tan, Ruiyi Zhang, Yifan Hu, Changyou Chen
The neural attention mechanism plays an important role in many natural language processing applications.
no code implementations • 26 Aug 2020 • Yiding Wang, Zhenyi Wang, Chenghao Li, Yilin Zhang, Haizhou Wang
In recent years, the number of people who endanger their own lives under the mental burden of depression has been increasing rapidly.
no code implementations • ACL 2020 • Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, Changyou Chen
Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions.
1 code implementation • ICLR 2020 • Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen
Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.
1 code implementation • AAAI 2019 • Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen
In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.
Ranked #4 on Human action generation on NTU RGB+D 2D
no code implementations • 18 Jul 2018 • Zhenyi Wang, Olga Veksler
For example, a salient object is more likely to appear near the center of an image, while the sky is more likely to appear in its top part, etc.