Search Results for author: De-Chuan Zhan

Found 64 papers, 37 papers with code

SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion

1 code implementation • 22 Apr 2024 • Lu Han, Xu-Yang Chen, Han-Jia Ye, De-Chuan Zhan

Multivariate time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare.

Multivariate Time Series Forecasting • Time Series

TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen

no code implementations • 16 Apr 2024 • Da-Wei Zhou, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan

The era of pre-trained models has ushered in a wealth of new insights for the machine learning community.

Incremental Learning • Novel Class Discovery

MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes

no code implementations • 14 Apr 2024 • Xin-Chun Li, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, Yang Yang, De-Chuan Zhan

For better model personalization, we point out that the hard-won personalized models are not well exploited and propose "inherited private model" to store the personalization experience.

Federated Learning

DIDA: Denoised Imitation Learning based on Domain Adaptation

no code implementations • 4 Apr 2024 • Kaichen Huang, Hai-Hang Sun, Shenghua Wan, Minghao Shao, Shuai Feng, Le Gan, De-Chuan Zhan

Imitating skills from low-quality datasets, such as sub-optimal demonstrations and observations with distractors, is common in real-world applications.

Domain Adaptation • Imitation Learning

SENSOR: Imitate Third-Person Expert's Behaviors via Active Sensoring

no code implementations • 4 Apr 2024 • Kaichen Huang, Minghao Shao, Shenghua Wan, Hai-Hang Sun, Shuai Feng, Le Gan, De-Chuan Zhan

In many real-world visual Imitation Learning (IL) scenarios, there is a misalignment between the agent's and the expert's perspectives, which might lead to the failure of imitation.

Imitation Learning

Bridge the Modality and Capacity Gaps in Vision-Language Model Selection

no code implementations • 20 Mar 2024 • Chao Yi, De-Chuan Zhan, Han-Jia Ye

It then uses this matrix to transfer useful statistics of VLMs from open-source datasets to the target dataset for bridging those two gaps and enhancing the VLM's capacity estimation for VLM selection.

Capacity Estimation • Image Classification +3

Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning

1 code implementation • 18 Mar 2024 • Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan

Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often results in the overwriting of old ones.

Class Incremental Learning • Decision Making +1

AD3: Implicit Action is the Key for World Models to Distinguish the Diverse Visual Distractors

no code implementations • 15 Mar 2024 • Yucen Wang, Shenghua Wan, Le Gan, Shuai Feng, De-Chuan Zhan

Model-based methods have significantly contributed to distinguishing task-irrelevant distractors for visual control.

Continual Learning with Pre-Trained Models: A Survey

2 code implementations • 29 Jan 2024 • Da-Wei Zhou, Hai-Long Sun, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan

Nowadays, real-world applications often face streaming data, which requires the learning system to absorb new knowledge as data evolves.

Continual Learning • Fairness

Twice Class Bias Correction for Imbalanced Semi-Supervised Learning

no code implementations • 27 Dec 2023 • Lan Li, Bowen Tao, Lu Han, De-Chuan Zhan, Han-Jia Ye

Differing from traditional semi-supervised learning, class-imbalanced semi-supervised learning presents two distinct challenges: (1) The imbalanced distribution of training samples leads to model bias towards certain classes, and (2) the distribution of unlabeled samples is unknown and potentially distinct from that of labeled samples, which further contributes to class bias in the pseudo-labels during training.

CLAF: Contrastive Learning with Augmented Features for Imbalanced Semi-Supervised Learning

no code implementations • 15 Dec 2023 • Bowen Tao, Lan Li, Xin-Chun Li, De-Chuan Zhan

For each pseudo-labeled sample, we select positive and negative samples from labeled data instead of unlabeled data to compute contrastive loss.

Contrastive Learning • Image Classification

Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration

1 code implementation • NeurIPS 2023 • Qi-Wei Wang, Da-Wei Zhou, Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye

In this Few-Shot Class-Incremental Learning (FSCIL) scenario, existing methods either introduce extra learnable components or rely on a frozen feature extractor to mitigate catastrophic forgetting and overfitting problems.

Few-Shot Class-Incremental Learning • Few-Shot Learning +3

Learning Robust Precipitation Forecaster by Temporal Frame Interpolation

1 code implementation • 30 Nov 2023 • Lu Han, Xu-Yang Chen, Han-Jia Ye, De-Chuan Zhan

This achievement not only underscores the effectiveness of our methodologies but also establishes a new standard for deep learning applications in weather forecasting.

Precipitation Forecasting • Transfer Learning +1

Unlocking the Transferability of Tokens in Deep Models for Tabular Data

no code implementations • 23 Oct 2023 • Qi-Le Zhou, Han-Jia Ye, Le-Ye Wang, De-Chuan Zhan

Fine-tuning a pre-trained deep neural network has become a successful paradigm in various machine learning tasks.

Transfer Learning

RE-SORT: Removing Spurious Correlation in Multilevel Interaction for CTR Prediction

1 code implementation • 26 Sep 2023 • Songli Wu, Liang Du, Jia-Qi Yang, Yuai Wang, De-Chuan Zhan, Shuang Zhao, Zixun Sun

Click-through rate (CTR) prediction is a critical task in recommendation systems, serving as the ultimate filtering step to sort items for a user.

Click-Through Rate Prediction • Recommendation Systems +1

PILOT: A Pre-Trained Model-Based Continual Learning Toolbox

1 code implementation • 13 Sep 2023 • Hai-Long Sun, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

While traditional machine learning can effectively tackle a wide range of problems, it primarily operates within a closed-world setting, which presents limitations when dealing with streaming data.

Class Incremental Learning • Incremental Learning

ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse

1 code implementation • 17 Aug 2023 • Yi-Kai Zhang, Lu Ren, Chao Yi, Qi-Wei Wang, De-Chuan Zhan, Han-Jia Ye

The rapid expansion of foundation pre-trained models and their fine-tuned counterparts has significantly contributed to the advancement of machine learning.

Streaming CTR Prediction: Rethinking Recommendation Task for Real-World Streaming Data

no code implementations • 14 Jul 2023 • Qi-Wei Wang, Hongyu Lu, Yu Chen, Da-Wei Zhou, De-Chuan Zhan, Ming Chen, Han-Jia Ye

The Click-Through Rate (CTR) prediction task is critical in industrial recommender systems, where models are usually deployed on dynamic streaming data in practical applications.

Click-Through Rate Prediction • Recommendation Systems

SeMAIL: Eliminating Distractors in Visual Imitation via Separated Models

no code implementations • 19 Jun 2023 • Shenghua Wan, Yucen Wang, Minghao Shao, Ruying Chen, De-Chuan Zhan

Model-based imitation learning (MBIL) is a popular reinforcement learning method that improves sample efficiency on high-dimension input sources, such as images and videos.

Imitation Learning

Learning without Forgetting for Vision-Language Models

no code implementations • 30 May 2023 • Da-Wei Zhou, Yuanhan Zhang, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

While traditional CIL methods focus on visual information to grasp core features, recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations with the aid of textual information.

Class Incremental Learning • Incremental Learning

MrTF: Model Refinery for Transductive Federated Learning

1 code implementation • 7 May 2023 • Xin-Chun Li, Yang Yang, De-Chuan Zhan

We propose a novel learning paradigm named transductive federated learning (TFL) to simultaneously consider the structural information of the to-be-inferred data.

Federated Learning

Preserving Locality in Vision Transformers for Class Incremental Learning

1 code implementation • 14 Apr 2023 • Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

In this paper, we encourage the model to preserve more local information as the training procedure goes on and devise a Locality-Preserved Attention (LPA) layer to emphasize the importance of local features.

Class Incremental Learning • Incremental Learning

The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting

1 code implementation • 11 Apr 2023 • Lu Han, Han-Jia Ye, De-Chuan Zhan

Our results conclude that the CD approach has higher capacity but often lacks robustness to accurately predict distributionally drifted time series.

Multivariate Time Series Forecasting • Time Series

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

2 code implementations • 13 Mar 2023 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

ADAM is a general framework that can be orthogonally combined with any parameter-efficient tuning method, which holds the advantages of PTM's generalizability and adapted model's adaptivity.

Class Incremental Learning • Incremental Learning +1

Deep Class-Incremental Learning: A Survey

3 code implementations • 7 Feb 2023 • Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world.

Class Incremental Learning • Image Classification +1

On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning

no code implementations • 15 Jan 2023 • Lu Han, Han-Jia Ye, De-Chuan Zhan

Based on the findings, we propose to improve PL in class-mismatched SSL with two components -- Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC).

Clustering

Self-Motivated Multi-Agent Exploration

1 code implementation • 5 Jan 2023 • Shaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu, De-Chuan Zhan

In cooperative multi-agent reinforcement learning (CMARL), it is critical for agents to achieve a balance between self-exploration and team collaboration.

SMAC+ • Starcraft +1

Learning Debiased Representations via Conditional Attribute Interpolation

1 code implementation • CVPR 2023 • Yi-Kai Zhang, Qi-Wei Wang, De-Chuan Zhan, Han-Jia Ye

When a dataset is biased, i.e., most samples have attributes spuriously correlated with the target label, a Deep Neural Network (DNN) is prone to make predictions by the "unintended" attribute, especially if it is easier to learn.

Attribute • Metric Learning

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again

no code implementations • 10 Oct 2022 • Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan

Complex teachers tend to be over-confident and traditional temperature scaling limits the efficacy of {\it class discriminability}, resulting in less discriminative wrong class probabilities.

Knowledge Distillation

Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems

1 code implementation • 1 Jun 2022 • Jia-Qi Yang, De-Chuan Zhan

We propose a generalized delayed feedback model (GDFM) that unifies both post-click behaviors and early conversions as stochastic post-click information, which could be utilized to train GDFM in a streaming manner efficiently.

Recommendation Systems

A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning

2 code implementations • 26 May 2022 • Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, De-Chuan Zhan

We find that when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work, especially in the case of limited memory budgets.

Class Incremental Learning • Incremental Learning

Generalized Knowledge Distillation via Relationship Matching

1 code implementation • 4 May 2022 • Han-Jia Ye, Su Lu, De-Chuan Zhan

Instead of enforcing the teacher to work on the same task as the student, we borrow the knowledge from a teacher trained from a general label space -- in this "Generalized Knowledge Distillation (GKD)", the classes of the teacher and the student may be the same, completely different, or partially overlapped.

Few-Shot Learning • Incremental Learning +1

Selective Cross-Task Distillation

no code implementations • 25 Apr 2022 • Su Lu, Han-Jia Ye, De-Chuan Zhan

Our method reuses cross-task knowledge from a distinct label space and efficiently assesses teachers without enumerating the model repository.

Knowledge Distillation

Identifying Ambiguous Similarity Conditions via Semantic Matching

1 code implementation • CVPR 2022 • Han-Jia Ye, Yi Shi, De-Chuan Zhan

To this end, we introduce a novel evaluation criterion by predicting the comparison's correctness after assigning the learned embeddings to their optimal conditions, which measures how much WS-CSL could cover latent semantics as the supervised model.

Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks

1 code implementation • 31 Mar 2022 • Da-Wei Zhou, Han-Jia Ye, Liang Ma, Di Xie, ShiLiang Pu, De-Chuan Zhan

In this work, we propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT), which synthesizes fake FSCIL tasks from the base dataset.

Few-Shot Class-Incremental Learning • Incremental Learning +1

Forward Compatible Few-Shot Class-Incremental Learning

1 code implementation • CVPR 2022 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, Liang Ma, ShiLiang Pu, De-Chuan Zhan

Forward compatibility requires future new classes to be easily incorporated into the current model based on the current stage data, and we seek to realize it by reserving embedding space for future new classes.

Few-Shot Class-Incremental Learning • Incremental Learning

PyCIL: A Python Toolbox for Class-Incremental Learning

1 code implementation • 23 Dec 2021 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, De-Chuan Zhan

Traditional machine learning systems are deployed under the closed-world setting, which requires the entire training data before the offline training process.

BIG-bench Machine Learning • Class Incremental Learning +1

RID-Noise: Towards Robust Inverse Design under Noisy Environments

1 code implementation • 7 Dec 2021 • Jia-Qi Yang, Ke-Bin Fan, Hao Ma, De-Chuan Zhan

We also define a sample-wise weight, which can be used in the maximum weighted likelihood estimation of an inverse model based on a cINN.

Robust Design

Co-Transport for Class-Incremental Learning

2 code implementations • 27 Jul 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

As a result, we propose CO-transport for class Incremental Learning (COIL), which learns to relate across incremental tasks with the class-wise semantic relationship.

Class Incremental Learning • Incremental Learning

Preliminary Steps Towards Federated Sentiment Classification

no code implementations • 26 Jul 2021 • Xin-Chun Li, Lan Li, De-Chuan Zhan, Yunfeng Shao, Bingshuai Li, Shaoming Song

Automatically mining sentiment tendency contained in natural language is a fundamental research to some artificial intelligent applications, where solutions alternate with challenges.

Classification • Dimensionality Reduction +4

Few-Shot Learning with a Strong Teacher

1 code implementation • 1 Jul 2021 • Han-Jia Ye, Lu Ming, De-Chuan Zhan, Wei-Lun Chao

Many existing works take the meta-learning approach, constructing a few-shot learner that can learn from few-shot examples to generate a classifier.

Few-Shot Learning

Contextualizing Meta-Learning via Learning to Decompose

1 code implementation • 15 Jun 2021 • Han-Jia Ye, Da-Wei Zhou, Lanqing Hong, Zhenguo Li, Xiu-Shen Wei, De-Chuan Zhan

To this end, we propose Learning to Decompose Network (LeadNet) to contextualize the meta-learned "support-to-target" strategy, leveraging the context of instances with one or mixed latent attributes in a support set.

Attribute • Few-Shot Image Classification +1

Semi-Supervised Multi-Modal Multi-Instance Multi-Label Deep Network with Optimal Transport

no code implementations • 17 Apr 2021 • Yang Yang, Zhao-Yang Fu, De-Chuan Zhan, Zhi-Bin Liu, Yuan Jiang

Moreover, we introduce the extrinsic unlabeled multi-modal multi-instance data, and propose the M3DNS, which considers the instance-level auto-encoder for single modality and modified bag-level optimal transport to strengthen the consistency among modalities.

Towards Enabling Meta-Learning from Target Models

1 code implementation • NeurIPS 2021 • Su Lu, Han-Jia Ye, Le Gan, De-Chuan Zhan

Different from $\mathcal{S}$/$\mathcal{Q}$ protocol, we can also evaluate a task-specific solver by comparing it to a target model $\mathcal{T}$, which is the optimal model for this task or a model that behaves well enough on this task ($\mathcal{S}$/$\mathcal{T}$ protocol).

Few-Shot Learning • Inductive Bias +1

Few-Shot Action Recognition with Compromised Metric via Optimal Transport

no code implementations • 8 Apr 2021 • Su Lu, Han-Jia Ye, De-Chuan Zhan

In detail, given two videos, we sample segments from them and cast the calculation of their distance as an optimal transport problem between two segment sequences.

Few-Shot action recognition • Few Shot Action Recognition +2

Procrustean Training for Imbalanced Deep Learning

no code implementations • ICCV 2021 • Han-Jia Ye, De-Chuan Zhan, Wei-Lun Chao

To correct these wrong predictions, the neural network then must focus on pushing features of minor class data across the decision boundaries between major and minor classes, leading to much larger gradients for features of minor classes.

Attribute

Learning Placeholders for Open-Set Recognition

1 code implementation • CVPR 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

To this end, we proposed to learn PlaceholdeRs for Open-SEt Recognition (Proser), which prepares for the unknown classes by allocating placeholders for both data and classifier.

Open Set Learning

Capturing Delayed Feedback in Conversion Rate Prediction via Elapsed-Time Sampling

1 code implementation • 6 Dec 2020 • Jia-Qi Yang, Xiang Li, Shuguang Han, Tao Zhuang, De-Chuan Zhan, Xiaoyi Zeng, Bin Tong

To strike a balance in this trade-off, we propose Elapsed-Time Sampling Delayed Feedback Model (ES-DFM), which models the relationship between the observed conversion distribution and the true conversion distribution.

Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks

1 code implementation • 30 Nov 2020 • Han-Jia Ye, Lu Han, De-Chuan Zhan

Meta-learning has become a practical approach towards few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes.

Unsupervised Few-Shot Image Classification • Unsupervised Few-Shot Learning

Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning

1 code implementation • 6 Jan 2020 • Han-Jia Ye, Hong-You Chen, De-Chuan Zhan, Wei-Lun Chao

Classifiers trained with class-imbalanced data are known to perform poorly on test data of the "minor" classes, of which we have insufficient training data.

Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning

1 code implementation • 7 Jun 2019 • Han-Jia Ye, Hexiang Hu, De-Chuan Zhan

In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- a model during the deployment is required to learn about tail categories with few shots and simultaneously classify the head classes.

Few-Shot Learning • Generalized Few-Shot Learning +2

Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions

5 code implementations • CVPR 2020 • Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, Fei Sha

Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and apply the function to instances from unseen classes with limited labels.

Few-Shot Image Classification • Few-Shot Learning +3

Rectify Heterogeneous Models with Semantic Mapping

no code implementations • ICML 2018 • Han-Jia Ye, De-Chuan Zhan, Yuan Jiang, Zhi-Hua Zhou

On the way to the robust learner for real-world applications, there are still great challenges, including considering unknown environments with limited data.

What Makes Objects Similar: A Unified Multi-Metric Learning Approach

no code implementations • NeurIPS 2016 • Han-Jia Ye, De-Chuan Zhan, Xue-Min Si, Yuan Jiang, Zhi-Hua Zhou

In UM2L, a type of combination operator is introduced for distance characterization from multiple perspectives, and thus can introduce flexibilities for representing and utilizing both spatial and semantic linkages.

Metric Learning
