Search Results for author: Yiqiang Chen

Found 37 papers, 17 papers with code

Generative AI for Synthetic Data Generation: Methods, Challenges and the Future

no code implementations · 7 Mar 2024 · Xu Guo, Yiqiang Chen

The recent surge in research focused on generating synthetic data from large language models (LLMs), especially for scenarios with limited data availability, marks a notable shift in Generative Artificial Intelligence (AI).

Synthetic Data Generation

Self-supervised Learning for Electroencephalogram: A Systematic Survey

no code implementations · 9 Jan 2024 · Weining Weng, Yang Gu, Shuai Guo, Yuan Ma, Zhaohua Yang, Yuchen Liu, Yiqiang Chen

We provide a comprehensive review of SSL for EEG analysis, covering the taxonomy, methodology, and technical details of existing EEG-based SSL frameworks, and discuss the differences between these methods.

EEG · Self-Supervised Learning

Differentially Private Pre-Trained Model Fusion using Decentralized Federated Graph Matching

no code implementations · 5 Nov 2023 · Qian Chen, Yiqiang Chen, Xinlong Jiang, Teng Zhang, Weiwei Dai, Wuliang Huang, Zhen Yan, Bo Ye

Model fusion is becoming a crucial component in the context of model-as-a-service scenarios, enabling the delivery of high-quality model services to local users.

Graph Matching · Privacy Preserving

GestureGPT: Zero-shot Interactive Gesture Understanding and Grounding with Large Language Model Agents

no code implementations · 19 Oct 2023 · Xin Zeng, Xiaoyu Wang, Tengxiang Zhang, Chun Yu, Shengdong Zhao, Yiqiang Chen

Current gesture recognition systems primarily focus on identifying gestures within a predefined set, leaving a gap in connecting these gestures to interactive GUI elements or system functions (e.g., linking a 'thumb-up' gesture to a 'like' button).

Gesture Recognition · Language Modelling · +1

ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning

1 code implementation · 8 Oct 2023 · Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji

When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.

Personalized Federated Learning

A Knowledge-Driven Cross-view Contrastive Learning for EEG Representation

no code implementations · 21 Sep 2023 · Weining Weng, Yang Gu, Qihui Zhang, Yingying Huang, Chunyan Miao, Yiqiang Chen

Due to the abundant neurophysiological information in the electroencephalogram (EEG) signal, EEG signals integrated with deep learning methods have gained substantial traction across numerous real-world tasks.

Contrastive Learning · EEG

FedBone: Towards Large-Scale Federated Multi-Task Learning

no code implementations · 30 Jun 2023 · Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao, Wuliang Huang

A conflicting-gradient projection technique is used to enhance the generalization of the large-scale general model across different tasks.

Federated Learning · Multi-Task Learning

FIXED: Frustratingly Easy Domain Generalization with Mixup

1 code implementation · 7 Nov 2022 · Wang Lu, Jindong Wang, Han Yu, Lei Huang, Xiang Zhang, Yiqiang Chen, Xing Xie

Firstly, Mixup cannot effectively identify the domain and class information that can be used for learning invariant representations.

Domain Generalization · Image Classification · +2
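For context, vanilla Mixup trains on convex combinations of sample pairs and their labels, and the snippet above argues that this mixing is blind to domain and class structure. A minimal NumPy sketch of plain Mixup (illustrative only; this is the standard baseline, not the paper's FIXED variant):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup: convex combination of two samples and their soft labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient sampled from Beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2     # interpolated input
    y = lam * y1 + (1 - lam) * y2     # interpolated (soft) label
    return x, y

# Toy example: mix two 4-dim feature vectors with one-hot labels.
x_a, y_a = np.ones(4), np.array([1.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
```

Because the mixing weight is drawn independently of domain or class, two samples from very different domains are interpolated exactly the same way as two from the same domain, which is the limitation the snippet points at.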

Domain-invariant Feature Exploration for Domain Generalization

1 code implementation · 25 Jul 2022 · Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie

Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains.

Domain Generalization · Knowledge Distillation · +2

Domain Generalization for Activity Recognition via Adaptive Feature Fusion

1 code implementation · 21 Jul 2022 · Xin Qin, Jindong Wang, Yiqiang Chen, Wang Lu, Xinlong Jiang

To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse domain-invariant and domain-specific representations to improve the model's generalization performance.

Domain Generalization · Human Activity Recognition
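The fusion idea in the snippet can be illustrated with a toy sketch: a shared (domain-invariant) feature is concatenated with a softmax-weighted combination of per-source-domain (domain-specific) features. All names, shapes, and the weighting scheme here are hypothetical, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_features(invariant_feat, specific_feats, branch_scores):
    """Concatenate the shared feature with a softmax-weighted sum of
    per-source-domain features (one score per source domain)."""
    w = softmax(branch_scores)
    specific = sum(wi * f for wi, f in zip(w, specific_feats))
    return np.concatenate([invariant_feat, specific])

# Toy example: two source-domain branches weighted equally.
fused = fuse_features(np.ones(3), [np.zeros(3), np.full(3, 2.0)],
                      np.array([0.0, 0.0]))
```

In a trained model the branch scores would be produced by a learned gating module per input, so the mixture adapts to how close a test sample is to each source domain.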

MetaFed: Federated Learning among Federations with Cyclic Knowledge Distillation for Personalized Healthcare

2 code implementations · 17 Jun 2022 · Yiqiang Chen, Wang Lu, Xin Qin, Jindong Wang, Xing Xie

Federated learning has attracted increasing attention for building models without accessing raw user data, especially in healthcare.

Federated Learning · Knowledge Distillation

Semantic-Discriminative Mixup for Generalizable Sensor-based Cross-domain Activity Recognition

no code implementations · 14 Jun 2022 · Wang Lu, Jindong Wang, Yiqiang Chen, Sinno Jialin Pan, Chunyu Hu, Xin Qin

Training on existing data often biases the model towards the distribution of the training data, so it may perform poorly on test data with different distributions.

Cross-Domain Activity Recognition · Domain Adaptation · +2

Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection

no code implementations · 3 Jan 2022 · Yuxin Zhang, Jindong Wang, Yiqiang Chen, Han Yu, Tao Qin

In this paper, we propose a novel approach called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection.

Self-Supervised Learning · Sleep Stage Detection · +3

Unsupervised Deep Anomaly Detection for Multi-Sensor Time-Series Signals

no code implementations · 27 Jul 2021 · Yuxin Zhang, Yiqiang Chen, Jindong Wang, Zhiwen Pan

We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets.

Human Activity Recognition · Time Series · +2

Generalizing to Unseen Domains: A Survey on Domain Generalization

1 code implementation · 2 Mar 2021 · Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu

Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.

Domain Generalization · Out-of-Distribution Generalization · +1

Cross-domain Activity Recognition via Substructural Optimal Transport

1 code implementation · 29 Jan 2021 · Wang Lu, Yiqiang Chen, Jindong Wang, Xin Qin

In this paper, we propose substructure-level matching for domain adaptation (SSDA) to better utilize the locality information of activity data for accurate and efficient knowledge transfer.

Clustering · Cross-Domain Activity Recognition · +3

Secure Weighted Aggregation for Federated Learning

no code implementations · 17 Oct 2020 · Jiale Guo, Ziyao Liu, Kwok-Yan Lam, Jun Zhao, Yiqiang Chen, Chaoping Xing

The situation is exacerbated by cloud-based implementations of digital services, where user data are captured and stored in distributed locations; aggregating such data for ML could therefore seriously breach privacy regulations.

Cryptography and Security · Distributed, Parallel, and Cluster Computing

Learning to Match Distributions for Domain Adaptation

1 code implementation · 17 Jul 2020 · Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu

However, it remains challenging to determine which method is suitable for a given application, since each is built with certain priors or biases.

Domain Adaptation · Inductive Bias

FOCUS: Dealing with Label Quality Disparity in Federated Learning

1 code implementation · 29 Jan 2020 · Yiqiang Chen, Xiaodong Yang, Xin Qin, Han Yu, Biao Chen, Zhiqi Shen

It maintains a small set of benchmark samples on the FL server and quantifies the credibility of each client's local data without directly observing it, by computing the mutual cross-entropy between the performance of the FL model on the local datasets and that of the client's local FL model on the benchmark dataset.

Federated Learning · Privacy Preserving
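The mutual cross-entropy idea in the snippet can be sketched directly: score a client by summing the cross-entropy of the global model on the client's labeled data and the cross-entropy of the client's model on the server benchmark, with lower scores meaning more credible data. This is a simplified illustration; the function names and the exact credibility mapping are assumptions, not the authors' formulation:

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean cross-entropy of predicted class probabilities against integer labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def mutual_cross_entropy(global_probs_on_local, local_labels,
                         client_probs_on_benchmark, benchmark_labels):
    """Sum of the two directional losses; lower suggests more credible client data."""
    return (cross_entropy(global_probs_on_local, local_labels)
            + cross_entropy(client_probs_on_benchmark, benchmark_labels))

# Toy example: a client whose labels agree with both models scores lower
# (more credible) than one whose labels disagree with both.
good = mutual_cross_entropy(np.array([[0.9, 0.1]]), np.array([0]),
                            np.array([[0.8, 0.2]]), np.array([0]))
bad = mutual_cross_entropy(np.array([[0.9, 0.1]]), np.array([1]),
                           np.array([[0.8, 0.2]]), np.array([1]))
```

The appeal of the construction is that the server never sees the client's raw samples, only losses computed on each side.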

Transfer Learning with Dynamic Adversarial Adaptation Network

no code implementations · 18 Sep 2019 · Chaohui Yu, Jindong Wang, Yiqiang Chen, Meiyu Huang

In this paper, we propose a novel Dynamic Adversarial Adaptation Network (DAAN) to dynamically learn domain-invariant representations while quantitatively evaluating the relative importance of global and local domain distributions.

Domain Adaptation · Transfer Learning

Transfer Learning with Dynamic Distribution Adaptation

1 code implementation · 17 Sep 2019 · Jindong Wang, Yiqiang Chen, Wenjie Feng, Han Yu, Meiyu Huang, Qiang Yang

Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.

Domain Adaptation · Image Classification · +2

Easy Transfer Learning By Exploiting Intra-domain Structures

1 code implementation · 2 Apr 2019 · Jindong Wang, Yiqiang Chen, Han Yu, Meiyu Huang, Qiang Yang

In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach which requires no model selection and hyperparameter tuning, while achieving competitive performance.

Computational Efficiency · Domain Adaptation · +2

Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning

1 code implementation · 25 Mar 2019 · Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu

In this paper, we propose a unified Transfer Channel Pruning (TCP) approach for accelerating UDA models.

Transfer Learning · Unsupervised Domain Adaptation

Balanced Distribution Adaptation for Transfer Learning

no code implementations · 2 Jul 2018 · Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, Zhiqi Shen

To tackle the distribution adaptation problem, in this paper we propose a novel transfer learning approach named Balanced Distribution Adaptation (BDA), which adaptively weighs the importance of the marginal and conditional distribution discrepancies; several existing methods can be treated as special cases of BDA.

Transfer Learning
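The balancing described in the snippet can be sketched as a weighted sum of a marginal discrepancy and a mean per-class (conditional) discrepancy between source and target features. As a hedged illustration, the sketch below uses a linear-kernel MMD (squared distance between domain means) as the discrepancy; the balance factor `mu` and function names are assumptions for exposition, not BDA's exact estimator:

```python
import numpy as np

def mmd_linear(xs, xt):
    """Linear-kernel MMD: squared distance between the two domain means."""
    return float(np.sum((xs.mean(axis=0) - xt.mean(axis=0)) ** 2))

def balanced_discrepancy(xs, ys, xt, yt, mu=0.5):
    """(1 - mu) * marginal MMD + mu * mean per-class (conditional) MMD."""
    marginal = mmd_linear(xs, xt)
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    conditional = np.mean([mmd_linear(xs[ys == c], xt[yt == c]) for c in classes])
    return (1 - mu) * marginal + mu * conditional

# Toy example: 2-D source/target features with two classes each.
xs = np.array([[0.0, 0.0], [2.0, 2.0]]); ys = np.array([0, 1])
xt = np.array([[1.0, 0.0], [3.0, 2.0]]); yt = np.array([0, 1])
d = balanced_discrepancy(xs, ys, xt, yt, mu=0.5)
```

Setting `mu = 0` recovers a purely marginal-matching method and `mu = 1` a purely conditional one, which is the sense in which existing methods become special cases of the balanced objective.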

Cross-position Activity Recognition with Stratified Transfer Learning

no code implementations · 26 Jun 2018 · Yiqiang Chen, Jindong Wang, Meiyu Huang, Han Yu

STL consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer.

Human Activity Recognition · Position · +1

Stratified Transfer Learning for Cross-domain Activity Recognition

no code implementations · 25 Dec 2017 · Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, Philip S. Yu

The proposed framework, referred to as Stratified Transfer Learning (STL), can dramatically improve the classification accuracy for cross-domain activity recognition.

Cross-Domain Activity Recognition · General Classification · +1
