Search Results for author: Zhenyi Wang

Found 28 papers, 14 papers with code

A Unified and General Framework for Continual Learning

1 code implementation • 20 Mar 2024 • Zhenyi Wang, Yan Li, Li Shen, Heng Huang

Extensive experiments on CL benchmarks and theoretical analysis demonstrate the effectiveness of the proposed refresh learning.

Continual Learning
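
The entry names "refresh learning" without detail; purely as a loose illustration (an assumed unlearn-then-relearn reading, not the paper's verified algorithm), one refresh-style update on a batch might look like:

```python
import torch

def refresh_step(model, loss_fn, x, y, lr=1e-2, unlearn_lr=1e-3):
    """Hypothetical 'refresh' update: a small gradient-ascent (unlearn)
    step on the current batch, then a normal descent (relearn) step."""
    # Unlearn: move a small amount *up* the loss surface.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(unlearn_lr * g)   # ascent = unlearning

    # Relearn: standard descent step on the same batch.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters())
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(lr * g)           # descent = relearning
    return loss.item()
```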

Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt

no code implementations • 14 Mar 2024 • Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang

Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones.

Few-Shot Class-Incremental Learning • Incremental Learning

Representation Surgery for Multi-Task Model Merging

1 code implementation • 5 Feb 2024 • Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, DaCheng Tao

That is, there is a significant discrepancy between the representation distributions of the merged model and the individual models, resulting in poor performance of the merged multi-task model.

Computational Efficiency • Multi-Task Learning
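
A minimal sketch of the representation alignment described above, assuming a small residual "surgery" adapter trained on unlabeled inputs to pull merged-model features toward an individual model's features; the module shape, loss, and loop are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SurgeryModule(nn.Module):
    """Lightweight residual adapter that corrects merged-model features."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, feat):
        return feat + self.net(feat)  # residual correction

def align(surgery, merged_enc, individual_enc, loader, steps=100, lr=1e-3):
    opt = torch.optim.Adam(surgery.parameters(), lr=lr)
    for _, x in zip(range(steps), loader):
        with torch.no_grad():                # both encoders stay frozen
            f_merged = merged_enc(x)
            f_single = individual_enc(x)     # target representation
        loss = ((surgery(f_merged) - f_single) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
```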

Task-Distributionally Robust Data-Free Meta-Learning

no code implementations • 23 Nov 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Yongxian Wei, Baoyuan Wu, Chun Yuan, DaCheng Tao

Task-distribution shift (TDS) leads to a biased meta-learner because the task distribution is skewed towards newly generated tasks.

Meta-Learning • Model Selection

AdaMerging: Adaptive Model Merging for Multi-Task Learning

1 code implementation • 4 Oct 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, DaCheng Tao

This approach aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.

Multi-Task Learning
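
A compressed sketch of coefficient learning in this spirit: merge task vectors with learnable weights and tune the weights by minimizing prediction entropy on unlabeled test batches. The tensor layout, initialization, and loop below are assumptions, not the paper's exact recipe:

```python
import torch
from torch.func import functional_call

def adamerge(model, pretrained, task_vectors, unlabeled_loader,
             steps=200, lr=1e-3):
    """Task-wise merging sketch: theta = theta_0 + sum_k lam_k * tau_k,
    with lam learned by entropy minimization on unlabeled data."""
    lam = torch.full((len(task_vectors),), 0.3, requires_grad=True)
    opt = torch.optim.Adam([lam], lr=lr)
    for _, x in zip(range(steps), unlabeled_loader):
        # Rebuild merged weights each step so gradients flow into lam.
        merged = {name: w + sum(l * tv[name]
                                for l, tv in zip(lam, task_vectors))
                  for name, w in pretrained.items()}
        logits = functional_call(model, merged, (x,))
        probs = logits.softmax(dim=-1).clamp_min(1e-8)
        entropy = -(probs * probs.log()).sum(-1).mean()
        opt.zero_grad(); entropy.backward(); opt.step()
    return lam.detach()
```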

Continual Learning From a Stream of APIs

no code implementations • 31 Aug 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao

Next, we train the CL model on synthetic data by minimizing the gap between its responses and those of the black-box API, thereby transferring the API's knowledge to the CL model.

Continual Learning
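
A minimal sketch of the distillation step described above, assuming a black-box `api_fn` that returns only output probabilities (no gradients or weights) and a hypothetical `synth_sampler` that yields synthetic inputs:

```python
import torch
import torch.nn.functional as F

def distill_from_api(cl_model, api_fn, synth_sampler, opt, steps=100):
    """Match the CL model's predictions to black-box API responses."""
    for _ in range(steps):
        x = synth_sampler()
        with torch.no_grad():
            target = api_fn(x)             # API response (probabilities)
        log_p = F.log_softmax(cl_model(x), dim=-1)
        loss = F.kl_div(log_p, target, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
```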

Distributionally Robust Cross Subject EEG Decoding

no code implementations • 19 Aug 2023 • Tiehang Duan, Zhenyi Wang, Gianfranco Doretto, Fang Li, Cui Tao, Donald Adjeroh

In this work, we propose a principled approach to perform dynamic evolution on the data for improvement of decoding robustness.

Data Augmentation • EEG +1

A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

1 code implementation • 16 Jul 2023 • Zhenyi Wang, Enneng Yang, Li Shen, Heng Huang

Through this comprehensive survey, we aspire to uncover potential solutions by drawing upon ideas and approaches from various fields that have dealt with forgetting.

Continual Learning • Federated Learning +1

Learning to Learn from APIs: Black-Box Data-Free Meta-Learning

1 code implementation • 28 May 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, DaCheng Tao

Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data.

Few-Shot Learning • Knowledge Distillation

Customized Load Profiles Synthesis for Electricity Customers Based on Conditional Diffusion Models

no code implementations • 24 Apr 2023 • Zhenyi Wang, Hongcai Zhang

In this paper, we propose a novel customized load profiles synthesis method based on conditional diffusion models for heterogeneous customers.

Noise Estimation
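
For background, conditional diffusion models of this kind are typically trained with a noise-prediction objective in which the customer condition is an extra input to the denoiser; a generic sketch (the denoiser interface and conditioning scheme are assumptions, not the paper's exact design):

```python
import torch

def diffusion_loss(denoiser, x0, cond, alphas_bar):
    """Standard conditional DDPM objective: predict the added noise eps
    from the noised profile x_t, the timestep t, and the condition."""
    b = x0.size(0)
    t = torch.randint(0, len(alphas_bar), (b,))
    a_bar = alphas_bar[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps  # forward noising
    return ((denoiser(x_t, t, cond) - eps) ** 2).mean()
```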

Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning

1 code implementation • CVPR 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, DaCheng Tao

The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data.

Meta-Learning

Data Augmented Flatness-aware Gradient Projection for Continual Learning

no code implementations • ICCV 2023 • Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guibing Guo, Xingwei Wang

In this paper, we first revisit the gradient projection method from the perspective of the flatness of the loss surface, and find that unflatness of the loss surface leads to catastrophic forgetting of old tasks when the projection constraint is relaxed to improve performance on new tasks.

Continual Learning
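
For context, the gradient projection operation the snippet revisits removes the component of the new-task gradient that lies in a subspace important to old tasks; a generic sketch of that projection (the paper's flatness-aware and data-augmentation additions are not shown):

```python
import torch

def project_gradient(grad, old_basis):
    """grad: flattened gradient, shape (d,).
    old_basis: (d, k) matrix with orthonormal columns spanning
    directions important to previously learned tasks.
    Returns the component of grad orthogonal to that subspace, so the
    update interferes less with old tasks."""
    return grad - old_basis @ (old_basis.T @ grad)
```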

MetaMix: Towards Corruption-Robust Continual Learning With Temporally Self-Adaptive Data Transformation

no code implementations • CVPR 2023 • Zhenyi Wang, Li Shen, Donglin Zhan, Qiuling Suo, Yanjun Zhu, Tiehang Duan, Mingchen Gao

To make CL models trustworthy and robust to corruptions when deployed in safety-critical scenarios, we propose a meta-learning framework of self-adaptive data augmentation to tackle corruption robustness in CL.

Continual Learning • Data Augmentation +1

Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions

1 code implementation • 3 Sep 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Donglin Zhan, Tiehang Duan, Mingchen Gao

Two key challenges arise in this more realistic setting: (i) how to use unlabeled data in the presence of a large amount of unlabeled out-of-distribution (OOD) data; and (ii) how to prevent catastrophic forgetting on previously learned task distributions due to the task distribution shift.

Meta-Learning

Improving Task-free Continual Learning by Distributionally Robust Memory Evolution

1 code implementation • 15 Jul 2022 • Zhenyi Wang, Li Shen, Le Fang, Qiuling Suo, Tiehang Duan, Mingchen Gao

To address these problems, for the first time, we propose a principled memory evolution framework that dynamically evolves the memory data distribution, making the memory buffer gradually harder to memorize using distributionally robust optimization (DRO).

Continual Learning
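
A rough sketch of the memory-evolution step described above: a few noisy gradient-ascent updates that make buffered samples harder for the current model (step size, noise scale, and step count are illustrative assumptions):

```python
import torch

def evolve_memory(model, loss_fn, x_mem, y_mem,
                  steps=1, eta=0.01, noise=0.001):
    """Make buffered examples harder via noisy gradient ascent on the loss."""
    x = x_mem.clone().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x), y_mem)
        g, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x += eta * g + noise * torch.randn_like(x)  # ascent + noise
    return x.detach()
```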

Space4HGNN: A Novel, Modularized and Reproducible Platform to Evaluate Heterogeneous Graph Neural Network

1 code implementation • 18 Feb 2022 • Tianyu Zhao, Cheng Yang, Yibo Li, Quan Gan, Zhenyi Wang, Fengqi Liang, Huan Zhao, Yingxia Shao, Xiao Wang, Chuan Shi

Heterogeneous Graph Neural Networks (HGNNs) have been successfully employed in various tasks, but the importance of their different design dimensions is not well understood due to the diversity of architectures and application scenarios.

Learning To Learn and Remember Super Long Multi-Domain Task Sequence

1 code implementation • CVPR 2022 • Zhenyi Wang, Li Shen, Tiehang Duan, Donglin Zhan, Le Fang, Mingchen Gao

We propose a domain shift detection technique to capture latent domain change and equip the meta optimizer with it to work in this setting.

Meta-Learning

Uncertainty Detection and Reduction in Neural Decoding of EEG Signals

1 code implementation • 28 Dec 2021 • Tiehang Duan, Zhenyi Wang, Sheng Liu, Sargur N. Srihari, Hui Yang

In this work, we propose an uncertainty estimation and reduction model (UNCER) to quantify and mitigate the uncertainty during the EEG decoding process.

Data Augmentation • Decision Making +3
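
The snippet mentions quantifying decoding uncertainty; purely as an illustration of one standard estimator (not necessarily the paper's mechanism), Monte Carlo dropout measures the spread over stochastic forward passes:

```python
import torch

def mc_dropout_uncertainty(model, x, passes=20):
    """Keep dropout active at test time and score uncertainty by the
    spread of predictions across stochastic forward passes."""
    model.train()                        # enables dropout layers
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(passes)])
    mean = probs.mean(0)
    # Predictive entropy as a simple per-sample uncertainty score.
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(-1)
    return mean, entropy
```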

Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness

1 code implementation • ICCV 2021 • Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo, Mingchen Gao

In this paper, we explore a more practical and challenging setting where task distribution changes over time with domain shift.

Change Detection • Management +1

Meta-Learning with Neural Tangent Kernels

no code implementations • 7 Feb 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu

We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.

Meta-Learning
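
The "analytic adaptation" mentioned above has a standard closed form: with a kernel k (e.g., the NTK), the adapted predictor is kernel ridge regression on the support set. A small NumPy sketch of that closed form, with an RBF kernel standing in for the NTK:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between row-stacked point sets a and b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def adapt_analytically(X_support, y_support, X_query, lam=1e-3, kernel=rbf):
    """Closed-form adaptation: f(x) = k(x, X)(K + lam*I)^{-1} y."""
    K = kernel(X_support, X_support)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_support)), y_support)
    return kernel(X_query, X_support) @ alpha
```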

Towards Learning to Remember in Meta Learning of Sequential Domains

no code implementations • 1 Jan 2021 • Zhenyi Wang, Tiehang Duan, Donglin Zhan, Changyou Chen

However, a natural generalization to the sequential domain setting that avoids catastrophic forgetting has not been well investigated.

Continual Learning • Meta-Learning

Meta-Learning in Reproducing Kernel Hilbert Space

no code implementations • ICLR 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu

Within this paradigm, we introduce two meta learning algorithms in RKHS, which no longer need an explicit inner-loop adaptation as in the MAML framework.

Meta-Learning

A Multitask Deep Learning Approach for User Depression Detection on Sina Weibo

no code implementations • 26 Aug 2020 • Yiding Wang, Zhenyi Wang, Chenghao Li, Yilin Zhang, Haizhou Wang

In recent years, the number of people whose lives are endangered by the mental burden of depression has been rising rapidly.

Classification • Depression Detection +2

Bayesian Meta Sampling for Fast Uncertainty Adaptation

1 code implementation • ICLR 2020 • Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, Changyou Chen

Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.

Meta-Learning

Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions

1 code implementation • AAAI 2019 • Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen

In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.

Action Generation

Location Augmentation for CNN

no code implementations • 18 Jul 2018 • Zhenyi Wang, Olga Veksler

For example, a salient object is more likely to appear near the center of an image, while the sky tends to occupy the top part.

Scene Parsing • Semantic Segmentation +1
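
A minimal sketch of feeding location information to a CNN: append normalized coordinate channels to the input so convolutions can condition on position (the paper's exact encoding may differ):

```python
import torch

def add_coord_channels(images):
    """images: (B, C, H, W). Returns (B, C + 2, H, W) with per-pixel
    normalized row/column coordinates appended as extra channels."""
    b, _, h, w = images.shape
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([images, ys, xs], dim=1)
```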
