Search Results for author: Dongrui Wu

Found 67 papers, 24 papers with code

Mixture-of-Experts for Open Set Domain Adaptation: A Dual-Space Detection Approach

no code implementations1 Nov 2023 Zhenbang Du, Jiayu An, Jiahao Hong, Dongrui Wu

Within an MoE, different experts address different input features, producing unique expert routing patterns for different classes in a routing feature space.

Domain Adaptation
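
The routing idea in the excerpt above can be sketched minimally: a softmax gate scores each expert for a given input, and the top-scoring experts form that input's routing pattern. This is an illustrative sketch only (the function, weight names, and top-k choice are assumptions, not the paper's implementation):

```python
import numpy as np

def route(x, gate_w, top_k=2):
    """Softmax gating over experts for one input vector x.

    gate_w: (num_experts, dim) gating weights -- illustrative only.
    Returns the routing distribution and the indices of the top-k experts;
    these per-input distributions are the "routing patterns" an open-set
    detector could compare across classes.
    """
    logits = gate_w @ x                      # one score per expert
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    top = np.argsort(probs)[::-1][:top_k]    # experts chosen for this input
    return probs, top

rng = np.random.default_rng(0)
gate_w = rng.standard_normal((4, 8))         # 4 experts, 8-d features
x = rng.standard_normal(8)
probs, top = route(x, gate_w)
```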

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

no code implementations1 Apr 2023 Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou

While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under the mini-batch sampling settings of stochastic gradient descent (SGD), with the assumption that the label noise is unbiased.


Adversarial Artifact Detection in EEG-Based Brain-Computer Interfaces

no code implementations28 Nov 2022 Xiaoqing Chen, Dongrui Wu

Detection of adversarial examples is crucial to both the understanding of this phenomenon and the defense.

Artifact Detection EEG

Facial Affect Analysis: Learning from Synthetic Data & Multi-Task Learning Challenges

1 code implementation20 Jul 2022 Siyang Li, Yifan Xu, Huanyu Wu, Dongrui Wu, Yingjie Yin, Jiajiong Cao, Jingting Ding

Facial affect analysis remains a challenging task as its setting transitions from lab-controlled to in-the-wild situations.

Multi-Task Learning

PyTSK: A Python Toolbox for TSK Fuzzy Systems

1 code implementation7 Jun 2022 Yuqi Cui, Dongrui Wu, Xue Jiang, Yifan Xu

This paper presents PyTSK, a Python toolbox for developing Takagi-Sugeno-Kang (TSK) fuzzy systems.

Clustering

AgFlow: Fast Model Selection of Penalized PCA via Implicit Regularization Effects of Gradient Flow

no code implementations7 Oct 2021 Haiyan Jiang, Haoyi Xiong, Dongrui Wu, Ji Liu, Dejing Dou

Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction.

Dimensionality Reduction Model Selection

Exploring the Common Principal Subspace of Deep Features in Neural Networks

no code implementations6 Oct 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

Specifically, we design a new metric $\mathcal{P}$-vector to represent the principal subspace of deep features learned in a DNN, and propose to measure angles between the principal subspaces using $\mathcal{P}$-vectors.

Image Reconstruction Self-Supervised Learning
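
The angle measurement the excerpt above describes can be sketched with the leading principal direction of each feature matrix. This is a simplified stand-in for the paper's $\mathcal{P}$-vector construction (the function name and the "first principal component only" simplification are assumptions):

```python
import numpy as np

def principal_angle_deg(feats_a, feats_b):
    """Angle (degrees) between the leading principal directions of two
    feature matrices (rows = samples). A sketch of the idea, not the
    paper's exact P-vector construction."""
    def first_pc(feats):
        centered = feats - feats.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]                       # top right-singular vector
    a, b = first_pc(feats_a), first_pc(feats_b)
    cos = abs(a @ b)                       # sign of a PC is arbitrary
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 5))
angle = principal_angle_deg(feats, feats)  # identical features -> ~0 degrees
```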

Optimization Variance: Exploring Generalization Properties of DNNs

1 code implementation3 Jun 2021 Xiao Zhang, Dongrui Wu, Haoyi Xiong, Bo Dai

Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent.

Learning Theory

Curse of Dimensionality for TSK Fuzzy Neural Networks: Explanation and Solutions

no code implementations8 Feb 2021 Yuqi Cui, Dongrui Wu, Yifan Xu

We show that two defuzzification operations, LogTSK and HTSK, the latter of which is first proposed in this paper, can avoid the saturation.

Implicit Regularization Effects of Unbiased Random Label Noises with SGD

no code implementations1 Jan 2021 Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu

Random label noises (or observational noises) widely exist in practical machine learning settings.

Empirical Studies on the Convergence of Feature Spaces in Deep Learning

no code implementations1 Jan 2021 Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou

While deep learning is effective at learning features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well studied or compared.

Image Reconstruction Self-Supervised Learning

FCM-RDpA: TSK Fuzzy Regression Model Construction Using Fuzzy C-Means Clustering, Regularization, DropRule, and Powerball AdaBelief

2 code implementations30 Nov 2020 Zhenhua Shi, Dongrui Wu, Chenfeng Guo, Changming Zhao, Yuqi Cui, Fei-Yue Wang

To effectively optimize Takagi-Sugeno-Kang (TSK) fuzzy systems for regression problems, a mini-batch gradient descent with regularization, DropRule, and AdaBound (MBGD-RDA) algorithm was recently proposed.

Clustering regression

A Survey on Negative Transfer

1 code implementation2 Sep 2020 Wen Zhang, Lingfei Deng, Lei Zhang, Dongrui Wu

Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate the learning in a target domain.

Multi-Task Learning

Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline

1 code implementation3 Jul 2020 Dongrui Wu, Xue Jiang, Ruimin Peng, Wanzeng Kong, Jian Huang, Zhigang Zeng

Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject, and demonstrated promising performance.

Classification EEG +4

Rethink the Connections among Generalization, Memorization and the Spectral Bias of DNNs

1 code implementation29 Apr 2020 Xiao Zhang, Haoyi Xiong, Dongrui Wu

Over-parameterized deep neural networks (DNNs) with sufficient capacity to memorize random noise can achieve excellent generalization performance, challenging the bias-variance trade-off in classical learning theory.

Learning Theory Memorization

Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016

no code implementations13 Apr 2020 Dongrui Wu, Yifan Xu, Bao-liang Lu

Usually, a calibration session is needed to collect some training data for a new subject, which is time-consuming and user unfriendly.

EEG Motor Imagery +1

Integrating Informativeness, Representativeness and Diversity in Pool-Based Sequential Active Learning for Regression

no code implementations26 Mar 2020 Ziang Liu, Dongrui Wu

It optimally selects the best few samples to label, so that a better machine learning model can be trained from the same number of labeled samples.

Active Learning BIG-bench Machine Learning +2

BoostTree and BoostForest for Ensemble Learning

1 code implementation21 Mar 2020 Changming Zhao, Dongrui Wu, Jian Huang, Ye Yuan, Hai-Tao Zhang, Ruimin Peng, Zhenhua Shi

Bootstrap aggregating (Bagging) and boosting are two popular ensemble learning approaches, which combine multiple base learners to generate a composite model for more accurate and more reliable performance.

Ensemble Learning General Classification +1

Pool-Based Unsupervised Active Learning for Regression Using Iterative Representativeness-Diversity Maximization (iRDM)

no code implementations17 Mar 2020 Ziang Liu, Xue Jiang, Hanbin Luo, Weili Fang, Jiajing Liu, Dongrui Wu

Active learning (AL) selects the most beneficial unlabeled samples to label, and hence a better machine learning model can be trained from the same number of labeled samples.

Active Learning regression

MBGD-RDA Training and Rule Pruning for Concise TSK Fuzzy Regression Models

no code implementations1 Mar 2020 Dongrui Wu

To effectively train Takagi-Sugeno-Kang (TSK) fuzzy systems for regression problems, a Mini-Batch Gradient Descent with Regularization, DropRule, and AdaBound (MBGD-RDA) algorithm was recently proposed.

regression

Supervised Enhanced Soft Subspace Clustering (SESSC) for TSK Fuzzy Classifiers

1 code implementation27 Feb 2020 Yuqi Cui, Huidong Wang, Dongrui Wu

Fuzzy c-means based clustering algorithms are frequently used for Takagi-Sugeno-Kang (TSK) fuzzy classifier antecedent parameter estimation.

Clustering

EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications

no code implementations28 Jan 2020 Xiaotong Gu, Zehong Cao, Alireza Jolfaei, Peng Xu, Dongrui Wu, Tzyy-Ping Jung, Chin-Teng Lin

Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.

Brain Computer Interface EEG +1

Unsupervised Pool-Based Active Learning for Linear Regression

1 code implementation14 Jan 2020 Ziang Liu, Dongrui Wu

It is therefore desirable to select the optimal samples to label, so that a good machine learning model can be trained from a minimum amount of labeled data.

Active Learning BIG-bench Machine Learning +2

Supervised Discriminative Sparse PCA with Adaptive Neighbors for Dimensionality Reduction

1 code implementation9 Jan 2020 Zhenhua Shi, Dongrui Wu, Jian Huang, Yu-Kai Wang, Chin-Teng Lin

Approaches that preserve only the local data structure, such as locality preserving projections, are usually unsupervised (and hence cannot use label information) and use a fixed similarity graph.

Clustering General Classification +1

EEG-based Drowsiness Estimation for Driving Safety using Deep Q-Learning

no code implementations8 Jan 2020 Yurui Ming, Dongrui Wu, Yu-Kai Wang, Yuhui Shi, Chin-Teng Lin

To the best of our knowledge, we are the first to introduce the deep reinforcement learning method to this BCI scenario, and our method can be potentially generalized to other BCI cases.

Brain Computer Interface EEG +3

Empirical Studies on the Properties of Linear Regions in Deep Neural Networks

no code implementations ICLR 2020 Xiao Zhang, Dongrui Wu

A deep neural network (DNN) with piecewise linear activations can partition the input space into numerous small linear regions, where different linear functions are fitted.

Different Set Domain Adaptation for Brain-Computer Interfaces: A Label Alignment Approach

1 code implementation3 Dec 2019 He He, Dongrui Wu

Currently, most domain adaptation approaches require the source domains to have the same feature space and label space as the target domain, which limits their applications, as the auxiliary data may have different feature spaces and/or different label spaces.

Domain Adaptation Motor Imagery +1

Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs

1 code implementation3 Dec 2019 Zihan Liu, Lubin Meng, Xiao Zhang, Weili Fang, Dongrui Wu

Multiple convolutional neural network (CNN) classifiers have been proposed for electroencephalogram (EEG) based brain-computer interfaces (BCIs).

EEG

Discriminative Joint Probability Maximum Mean Discrepancy (DJP-MMD) for Domain Adaptation

1 code implementation1 Dec 2019 Wen Zhang, Dongrui Wu

Many existing domain adaptation approaches are based on the joint MMD, which is computed as the (weighted) sum of the marginal distribution discrepancy and the conditional distribution discrepancy; however, a more natural metric may be their joint probability distribution discrepancy.

Domain Adaptation General Classification +2
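
The joint MMD the excerpt above contrasts against decomposes into a marginal and a conditional discrepancy term. The marginal term with a linear kernel reduces to the squared distance between domain means; this is a sketch of that one term only (function name and kernel choice are assumptions, not the DJP-MMD formulation):

```python
import numpy as np

def linear_mmd2(xs, xt):
    """Squared MMD with a linear kernel: the squared distance between the
    two sample means. This is the marginal-distribution term of the joint
    MMD; the conditional term repeats the same computation per class on
    pseudo-labeled target samples."""
    return float(np.sum((xs.mean(axis=0) - xt.mean(axis=0)) ** 2))

rng = np.random.default_rng(2)
xs = rng.standard_normal((200, 3))         # source domain features
xt = rng.standard_normal((200, 3)) + 2.0   # mean-shifted target domain
same = linear_mmd2(xs, xs)                 # identical samples -> 0.0
shifted = linear_mmd2(xs, xt)              # shifted domain -> large
```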

Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces

no code implementations7 Nov 2019 Xue Jiang, Xiao Zhang, Dongrui Wu

Learning a good substitute model is critical to the success of these attacks, but it requires a large number of queries to the target model.

Active Learning EEG

Manifold Embedded Knowledge Transfer for Brain-Computer Interfaces

1 code implementation14 Oct 2019 Wen Zhang, Dongrui Wu

Experiments on four EEG datasets from two different BCI paradigms demonstrated that MEKT outperformed several state-of-the-art transfer learning approaches, and DTE can reduce more than half of the computational cost when the number of source subjects is large, with little sacrifice of classification accuracy.

Domain Adaptation EEG +2

EEG-Based Driver Drowsiness Estimation Using Feature Weighted Episodic Training

no code implementations25 Sep 2019 Yuqi Cui, Yifan Xu, Dongrui Wu

A calibration session is usually required to collect some subject-specific data and tune the model parameters before applying it to a new subject, which is very inconvenient and not user-friendly.

Domain Generalization EEG

Multi-Task Deep Learning with Dynamic Programming for Embryo Early Development Stage Classification from Time-Lapse Videos

no code implementations22 Aug 2019 Zihan Liu, Bo Huang, Yuqi Cui, Yifan Xu, Bo Zhang, Lixia Zhu, Yang Wang, Lei Jin, Dongrui Wu

Accurate classification of embryo early development stages can provide embryologists valuable information for assessing the embryo quality, and hence is critical to the success of IVF.

General Classification

Multi-View Broad Learning System for Primate Oculomotor Decision Decoding

1 code implementation16 Aug 2019 Zhenhua Shi, Xiaomo Chen, Changming Zhao, He He, Veit Stuphorn, Dongrui Wu

Multi-view learning improves the learning performance by utilizing multi-view data: data collected from multiple sources, or feature sets extracted from the same data source.

Multi-View Learning

Optimize TSK Fuzzy Systems for Classification Problems: Mini-Batch Gradient Descent with Uniform Regularization and Batch Normalization

1 code implementation1 Aug 2019 Yuqi Cui, Jian Huang, Dongrui Wu

Takagi-Sugeno-Kang (TSK) fuzzy systems are flexible and interpretable machine learning models; however, they may not be easily optimized when the data size is large, and/or the data dimensionality is high.

General Classification Interpretable Machine Learning

Canonical Correlation Analysis (CCA) Based Multi-View Learning: An Overview

no code implementations3 Jul 2019 Chenfeng Guo, Dongrui Wu

Canonical correlation analysis (CCA) is very important in MVL, whose main idea is to map data from different views onto a common space with maximum correlation.

Multi-View Learning

Patch Learning

1 code implementation1 Jun 2019 Dongrui Wu, Jerry M. Mendel

There have been different strategies to improve the performance of a machine learning model, e.g., increasing the depth, width, and/or nonlinearity of the model, and using ensemble learning to aggregate multiple base/weak learners in parallel or in series.

BIG-bench Machine Learning Ensemble Learning +2

On the Vulnerability of CNN Classifiers in EEG-Based BCIs

no code implementations31 Mar 2019 Xiao Zhang, Dongrui Wu

Deep learning has been successfully used in numerous applications because of its outstanding performance and the ability to avoid manual feature engineering.

Brain Computer Interface EEG +1

Active Stacking for Heart Rate Estimation

no code implementations26 Mar 2019 Dongrui Wu, Feifei Liu, Chengyu Liu

Moreover, active learning can be used to optimally select a few trials from a new subject to label, based on which a stacking ensemble regression model can be trained to aggregate the base estimators.

Active Learning Heart rate estimation +1
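
The stacking step the excerpt above describes can be sketched with a least-squares meta-model over base-estimator predictions. The function and the toy "heart rate" data are illustrative assumptions, not the paper's ensemble:

```python
import numpy as np

def stack_predict(base_preds, y, new_preds):
    """Stacking: fit a least-squares meta-model over base-estimator
    predictions (columns of base_preds), then aggregate the base
    predictions on new data. base_preds: (n_samples, n_estimators)."""
    X = np.column_stack([base_preds, np.ones(len(base_preds))])   # add bias
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    Xn = np.column_stack([new_preds, np.ones(len(new_preds))])
    return Xn @ w

# toy check: two constantly-biased base estimators of a true signal
rng = np.random.default_rng(5)
y = rng.uniform(60, 100, 80)              # "heart rates" in bpm
base = np.column_stack([y + 5, y - 3])    # biased base predictions
fused = stack_predict(base, y, base)      # meta-model cancels the biases
```

Because the meta-model sees the base predictions as features, it can learn to cancel their systematic biases, which is the appeal of stacking over simple averaging.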

Optimize TSK Fuzzy Systems for Regression Problems: Mini-Batch Gradient Descent with Regularization, DropRule and AdaBound (MBGD-RDA)

1 code implementation26 Mar 2019 Dongrui Wu, Ye Yuan, Yihua Tan

Our final algorithm, mini-batch gradient descent with regularization, DropRule and AdaBound (MBGD-RDA), can achieve fast convergence in training TSK fuzzy systems, and also superior generalization performance in testing.

Wasserstein Distance based Deep Adversarial Transfer Learning for Intelligent Fault Diagnosis

no code implementations2 Mar 2019 Cheng Cheng, Beitong Zhou, Guijun Ma, Dongrui Wu, Ye Yuan

However, for the diverse working conditions in industry, deep learning suffers from two difficulties: one is that the well-defined (source domain) and new (target domain) datasets have different feature distributions; the other is that insufficient or no labelled data in the target domain significantly reduces the accuracy of fault diagnosis.

Transfer Learning

Multi-Tasking Genetic Algorithm (MTGA) for Fuzzy System Optimization

1 code implementation15 Dec 2018 Dongrui Wu, Xianfeng Tan

Experiments on simultaneous optimization of type-1 and interval type-2 fuzzy logic controllers for couple-tank water level control demonstrated that the MTGA can find better fuzzy logic controllers than other approaches.

Multi-Task Learning

Active Learning for Regression Using Greedy Sampling

1 code implementation8 Aug 2018 Dongrui Wu, Chin-Teng Lin, Jian Huang

Active learning for regression (ALR) is a methodology to reduce the number of labeled samples, by selecting the most beneficial ones to label, instead of random selection.

Active Learning EEG +1
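
The selection step the excerpt above describes can be sketched with the input-space variant of greedy sampling: start near the pool centroid, then repeatedly pick the sample farthest from everything selected so far. This is a hedged sketch of the diversity idea (names and the distance metric are assumptions; the paper's GSy/iGS variants also use the output space):

```python
import numpy as np

def greedy_sample(pool, n_select):
    """GSx-style diversity sampling in input space: seed with the sample
    closest to the pool centroid, then greedily add the sample whose
    minimum distance to the selected set is largest."""
    centroid = pool.mean(axis=0)
    selected = [int(np.argmin(np.linalg.norm(pool - centroid, axis=1)))]
    while len(selected) < n_select:
        dists = np.min(
            [np.linalg.norm(pool - pool[i], axis=1) for i in selected],
            axis=0)
        dists[selected] = -1.0             # never re-pick a selected sample
        selected.append(int(np.argmax(dists)))
    return selected

rng = np.random.default_rng(3)
pool = rng.standard_normal((50, 2))
idx = greedy_sample(pool, 5)               # 5 distinct, spread-out indices
```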

Transfer Learning Enhanced Common Spatial Pattern Filtering for Brain Computer Interfaces (BCIs): Overview and a New Approach

no code implementations8 Aug 2018 He He, Dongrui Wu

The electroencephalogram (EEG) is the most widely used input for brain computer interfaces (BCIs), and common spatial pattern (CSP) is frequently used to spatially filter it to increase its signal-to-noise ratio.

EEG General Classification +2

Transfer Learning for Brain-Computer Interfaces: A Euclidean Space Data Alignment Approach

1 code implementation8 Aug 2018 He He, Dongrui Wu

Our approach has three desirable properties: 1) it aligns the EEG trials directly in the Euclidean space, and any signal processing, feature extraction and machine learning algorithms can then be applied to the aligned trials; 2) its computational cost is very low; and, 3) it is unsupervised and does not need any label information from the new subject.

EEG General Classification +2
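
Property 1) in the excerpt above is commonly read as whitening each trial by the inverse square root of the subject's mean spatial covariance, so that the aligned trials' mean covariance becomes the identity for every subject. A minimal sketch under that reading (function name is an assumption):

```python
import numpy as np

def euclidean_align(trials):
    """Whiten EEG trials by the inverse square root of their mean spatial
    covariance; afterwards the aligned trials' mean covariance is the
    identity, making subjects directly comparable.
    trials: (n_trials, n_channels, n_samples)."""
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    r = covs.mean(axis=0)                  # reference covariance
    vals, vecs = np.linalg.eigh(r)         # symmetric eigendecomposition
    r_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.array([r_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(4)
trials = rng.standard_normal((10, 4, 256))  # 10 trials, 4 channels
aligned = euclidean_align(trials)
```

Note the alignment is unsupervised: no labels are used, matching property 3) in the excerpt.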

Affect Estimation in 3D Space Using Multi-Task Active Learning for Regression

no code implementations8 Aug 2018 Dongrui Wu, Jian Huang

Acquisition of labeled training samples for affective computing is usually costly and time-consuming, as affects are intrinsically subjective, subtle and uncertain, and hence multiple human assessors are needed to evaluate each affective sample.

Active Learning regression

Multi-View Fuzzy Logic System with the Cooperation between Visible and Hidden Views

no code implementations23 Jul 2018 Te Zhang, Zhaohong Deng, Dongrui Wu, Shitong Wang

Multi-view datasets are frequently encountered in learning tasks, such as web data mining and multimedia information analysis.

Multi-View Learning

Offline EEG-Based Driver Drowsiness Estimation Using Enhanced Batch-Mode Active Learning (EBMAL) for Regression

no code implementations12 May 2018 Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin

There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals.

Active Learning Brain Computer Interface +2

Active Semi-supervised Transfer Learning (ASTL) for Offline BCI Calibration

no code implementations12 May 2018 Dongrui Wu

Single-trial classification of event-related potentials in electroencephalogram (EEG) signals is a very important paradigm of brain-computer interface (BCI).

Active Learning Brain Computer Interface +3

Pool-Based Sequential Active Learning for Regression

1 code implementation12 May 2018 Dongrui Wu

Given a pool of unlabeled samples, it tries to select the most useful ones to label so that a model built from them can achieve the best possible performance.

Active Learning Informativeness +1

OMG - Emotion Challenge Solution

no code implementations30 Apr 2018 Yuqi Cui, Xiao Zhang, Yang Wang, Chenfeng Guo, Dongrui Wu

This short paper describes our solution to the 2018 IEEE World Congress on Computational Intelligence One-Minute Gradual-Emotional Behavior Challenge, whose goal was to estimate continuous arousal and valence values from short videos.

regression

EEG-Based User Reaction Time Estimation Using Riemannian Geometry Features

no code implementations27 Apr 2017 Dongrui Wu, Brent J. Lance, Vernon J. Lawhern, Stephen Gordon, Tzyy-Ping Jung, Chin-Teng Lin

Riemannian geometry has been successfully used in many brain-computer interface (BCI) classification problems and demonstrated superior performance.

Brain Computer Interface EEG +1

Driver Drowsiness Estimation from EEG Signals Using Online Weighted Adaptation Regularization for Regression (OwARR)

no code implementations9 Feb 2017 Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance, Chin-Teng Lin

By integrating fuzzy sets with domain adaptation, we propose a novel online weighted adaptation regularization for regression (OwARR) algorithm to reduce the amount of subject-specific calibration data, and also a source domain selection (SDS) approach to save about half of the computational cost of OwARR.

Domain Adaptation EEG +2

Online and Offline Domain Adaptation for Reducing BCI Calibration Effort

no code implementations9 Feb 2017 Dongrui Wu

This paper proposes both online and offline weighted adaptation regularization (wAR) algorithms to reduce this calibration effort, i.e., to minimize the amount of labeled subject-specific EEG data required in BCI calibration, and hence to increase the utility of the BCI system.

Domain Adaptation EEG +1

Switching EEG Headsets Made Easy: Reducing Offline Calibration Effort Using Active Weighted Adaptation Regularization

no code implementations9 Feb 2017 Dongrui Wu, Vernon J. Lawhern, W. David Hairston, Brent J. Lance

wAR makes use of labeled data from the previous headset and handles class-imbalance, and active learning selects the most informative samples from the new headset to label.

Active Learning Brain Computer Interface +4

Spatial Filtering for EEG-Based Regression Problems in Brain-Computer Interface (BCI)

no code implementations9 Feb 2017 Dongrui Wu, Jung-Tai King, Chun-Hsiang Chuang, Chin-Teng Lin, Tzyy-Ping Jung

Electroencephalogram (EEG) signals are frequently used in brain-computer interfaces (BCIs), but they are easily contaminated by artifacts and noises, so preprocessing must be done before they are fed into a machine learning algorithm for classification or regression.

Brain Computer Interface Classification +3
