Search Results for author: Yunfeng Fan

Found 5 papers, 0 papers with code

Balanced Multi-modal Federated Learning via Cross-Modal Infiltration

no code implementations • 31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Song Guo

Federated learning (FL) underpins advances in privacy-preserving distributed computing by training neural networks collaboratively without exposing clients' raw data.

Distributed Computing · Federated Learning · +2
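As a rough illustration of the FL setup described in the snippet above, here is a minimal FedAvg-style sketch in plain NumPy. The names (`local_step`, `fedavg_round`) and the linear model are hypothetical, chosen for brevity, and are not from the paper; the point is only that model weights, never raw data, cross the client-server boundary.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few local gradient steps on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: only weights leave the clients."""
    updates = [local_step(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)  # size-weighted mean

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # each client holds its own private data shard
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true without pooling any raw data
```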

Client-wise Modality Selection for Balanced Multi-modal Federated Learning

no code implementations • 31 Dec 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Penghui Ruan, Song Guo

Selecting suitable clients to participate in iterative federated learning (FL) rounds is critical to effectively harnessing a broad range of distributed datasets.

Federated Learning · Selection bias
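The client-selection problem above can be made concrete with a small, hypothetical sketch: sample k clients per round with probability proportional to some score. Here the score is simply local sample count; the paper's actual, modality-aware criterion is not reproduced.

```python
import numpy as np

def select_clients(scores, k, rng):
    """Sample k distinct clients with probability proportional to score."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    return rng.choice(len(scores), size=k, replace=False, p=p)

rng = np.random.default_rng(42)
scores = [120, 40, 300, 75, 10, 220]   # e.g. per-client sample counts
for rnd in range(3):
    chosen = select_clients(scores, k=3, rng=rng)
    print(f"round {rnd}: clients {sorted(chosen.tolist())}")
```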

Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement

no code implementations • 20 Mar 2023 Fushuo Huo, Wenchao Xu, Jingcai Guo, Haozhao Wang, Yunfeng Fan, Song Guo

In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the non-exemplar online class-incremental learning (NO-CL) problem, which consists of two strategies: 1) Dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations, rather than exemplar buffers, for both privacy preservation and memory reduction.

Continual Learning
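To make the prototype idea above concrete, here is a generic nearest-prototype sketch, assuming only that a prototype is the per-class mean feature vector. It is not the paper's DSR method (which additionally uses high-dimensional quasi-orthogonal prototypes and a refinement step), but it shows why prototypes can replace a buffer of raw exemplars.

```python
import numpy as np

def build_prototypes(features, labels):
    """One mean embedding per class; replaces an exemplar buffer."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(protos, x):
    """Assign x to the class with the nearest prototype."""
    return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))

rng = np.random.default_rng(1)
# Three toy classes with mean feature values 0, 1, 2 in 8 dimensions.
feats = np.vstack([rng.normal(loc=c, size=(30, 8)) for c in range(3)])
labels = np.repeat(np.arange(3), 30)
protos = build_prototypes(feats, labels)
print(predict(protos, rng.normal(loc=2, size=8)))  # likely class 2
```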

DualMix: Unleashing the Potential of Data Augmentation for Online Class-Incremental Learning

no code implementations • 14 Mar 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Jiaqi Zhu, Junxiao Wang, Song Guo

Unfortunately, online class-incremental (OCI) learning can suffer from catastrophic forgetting (CF), as the decision boundaries for old classes become inaccurate when perturbed by new ones.

Class Incremental Learning · Data Augmentation · +1

PMR: Prototypical Modal Rebalance for Multimodal Learning

no code implementations • CVPR 2023 Yunfeng Fan, Wenchao Xu, Haozhao Wang, Junxiao Wang, Song Guo

Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations.
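As a rough illustration of modalities compensating for each other, here is a hypothetical late-fusion sketch; it is not PMR's prototypical rebalancing, only a generic baseline in which each modality's logits are converted to probabilities and convexly combined.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse(logits_audio, logits_visual, alpha=0.5):
    """Convex combination of per-modality predictions."""
    return alpha * softmax(logits_audio) + (1 - alpha) * softmax(logits_visual)

rng = np.random.default_rng(7)
la = rng.normal(size=(4, 5))  # audio-branch logits: 4 samples, 5 classes
lv = rng.normal(size=(4, 5))  # visual-branch logits
print(fuse(la, lv).argmax(axis=1))  # fused class predictions
```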
