Search Results for author: Defang Chen

Found 21 papers, 13 papers with code

Knowledge Translation: A New Pathway for Model Compression

1 code implementation • 11 Jan 2024 • Wujie Sun, Defang Chen, Jiawei Chen, Yan Feng, Chun Chen, Can Wang

Deep learning has witnessed significant advancements in recent years at the cost of increasing training, inference, and model storage overhead.

Data Augmentation • Model Compression +1

Fast ODE-based Sampling for Diffusion Models in Around 5 Steps

2 code implementations • 30 Nov 2023 • Zhenyu Zhou, Defang Chen, Can Wang, Chun Chen

Sampling from diffusion models can be treated as solving the corresponding ordinary differential equations (ODEs), with the aim of obtaining an accurate solution with as few function evaluations (NFE) as possible.

Image Generation
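
For the entry above, a minimal sketch of what ODE-based sampling looks like in general: an Euler discretization of the sampling ODE, where each step costs one network evaluation, so the number of timesteps directly sets the NFE budget. The `eps_model` denoiser, the sigma-style time grid, and the plain Euler rule are illustrative assumptions, not the paper's proposed solver.

```python
import torch

def euler_ode_sample(eps_model, x_T, timesteps):
    """Integrate dx/dt = eps_model(x, t) from high to low noise with Euler steps.

    Each step costs one network call, so len(timesteps) - 1 is the NFE budget.
    """
    x = x_T
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        d = eps_model(x, t_cur)           # estimated slope dx/dt at the current noise level
        x = x + (t_next - t_cur) * d      # Euler update toward lower noise
    return x

# Usage with a stand-in denoiser (assumption: a sigma-style time grid from 80 down to ~0)
if __name__ == "__main__":
    eps_model = lambda x, t: x / (t ** 2 + 1) ** 0.5   # placeholder for a trained noise predictor
    timesteps = torch.linspace(80.0, 0.002, 6)          # 6 points -> 5 Euler steps (about 5 NFE)
    x_T = torch.randn(1, 3, 32, 32) * timesteps[0]
    x_0 = euler_ode_sample(eps_model, x_T, timesteps)
```

The paper targets this same setting with a more accurate update rule, so that good samples emerge in around five such steps.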

Customizing Synthetic Data for Data-Free Student Learning

1 code implementation • 10 Jul 2023 • Shiya Luo, Defang Chen, Can Wang

Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning.

Data-free Knowledge Distillation
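
To make the data-free setting above concrete, here is a minimal sketch of a generic data-free distillation step, assuming hypothetical `generator`, `teacher`, and `student` modules and a plain temperature-scaled KL objective; the paper's customization of the synthetic data for the student is not modeled here.

```python
import torch
import torch.nn.functional as F

def data_free_step(generator, teacher, student, opt_s, batch=64, z_dim=100, T=4.0):
    """One student update driven purely by synthetic data from the generator."""
    z = torch.randn(batch, z_dim)
    x_syn = generator(z).detach()                          # synthetic images stand in for real data
    with torch.no_grad():
        t_logits = teacher(x_syn)                          # the teacher provides the only supervision
    s_logits = student(x_syn)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```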

Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning

1 code implementation • 11 Jun 2023 • Hailin Zhang, Defang Chen, Can Wang

Multi-Teacher knowledge distillation provides students with additional supervision from multiple pre-trained teachers with diverse information sources.

Knowledge Distillation • Meta-Learning
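
A minimal sketch of the multi-teacher setup described above: the student matches a weighted mixture of the teachers' softened predictions. Uniform weights are used as a placeholder; the paper's contribution is learning such weights adaptively via meta-learning.

```python
import torch.nn.functional as F

def multi_teacher_kd_loss(s_logits, teacher_logits_list, weights=None, T=4.0):
    """KL between the student and a weighted mixture of softened teacher predictions."""
    if weights is None:                                    # assumption: equal teacher weights
        weights = [1.0 / len(teacher_logits_list)] * len(teacher_logits_list)
    target = sum(w * F.softmax(t / T, dim=1)
                 for w, t in zip(weights, teacher_logits_list))
    return F.kl_div(F.log_softmax(s_logits / T, dim=1), target,
                    reduction="batchmean") * T * T
```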

A Geometric Perspective on Diffusion Models

no code implementations • 31 May 2023 • Defang Chen, Zhenyu Zhou, Jian-Ping Mei, Chunhua Shen, Chun Chen, Can Wang

Recent years have witnessed significant progress in developing effective training and fast sampling techniques for diffusion models.

Denoising

Accelerating Diffusion Sampling with Classifier-based Feature Distillation

1 code implementation • 22 Nov 2022 • Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun Chen

Instead of aligning output images, we distill the teacher's sharpened feature distribution into the student with a dataset-independent classifier, making the student focus on those important features to improve performance.
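
A minimal sketch of the feature-level objective the excerpt describes, under the assumption that a shared classifier scores both teacher and student features and that sharpening is done with a low softmax temperature; the exact classifier construction and temperatures in the paper may differ.

```python
import torch
import torch.nn.functional as F

def classifier_feature_distill_loss(classifier, s_feat, t_feat, T_student=1.0, T_teacher=0.5):
    """Match the student's feature-level predictions to the teacher's sharpened ones
    through a shared classifier head."""
    s_pred = F.log_softmax(classifier(s_feat) / T_student, dim=1)
    with torch.no_grad():
        t_pred = F.softmax(classifier(t_feat) / T_teacher, dim=1)   # T < 1 sharpens the teacher
    return F.kl_div(s_pred, t_pred, reduction="batchmean")
```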

Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision

no code implementations • 25 Oct 2022 • Jiongyu Guo, Defang Chen, Can Wang

Alignahead++ transfers structure and feature information in a student layer to the previous layer of another simultaneously trained student model in an alternating training procedure.

Knowledge Distillation • Model Compression

Label-Efficient Domain Generalization via Collaborative Exploration and Generalization

no code implementations • 7 Aug 2022 • Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin

To escape from the dilemma between domain generalization and annotation costs, in this paper, we introduce a novel task named label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains.

Domain Generalization

Improving Knowledge Graph Embedding via Iterative Self-Semantic Knowledge Distillation

no code implementations • 7 Jun 2022 • Zhehui Zhou, Defang Chen, Can Wang, Yan Feng, Chun Chen

Iteratively incorporating and accumulating iteration-based semantic information enables the low-dimensional model to be more expressive for better link prediction in KGs.

Knowledge Distillation • Knowledge Graph Embedding +2

Alignahead: Online Cross-Layer Knowledge Extraction on Graph Neural Networks

1 code implementation • 5 May 2022 • Jiongyu Guo, Defang Chen, Can Wang

Existing knowledge distillation methods on graph neural networks (GNNs) are mostly offline, where the student model extracts knowledge from a powerful teacher model to improve its performance.

Knowledge Distillation

Knowledge Distillation with the Reused Teacher Classifier

1 code implementation • CVPR 2022 • Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, Chun Chen

Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model without much sacrifice of performance.

Knowledge Distillation
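
As the title suggests, the distinctive ingredient here is reusing the teacher's classifier head on top of the student backbone. The sketch below shows one plausible arrangement, with a linear projector and an L2 feature-alignment loss as assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn

class StudentWithTeacherClassifier(nn.Module):
    """Student backbone plus a small projector, topped with the frozen teacher classifier."""
    def __init__(self, student_backbone, teacher_classifier, s_dim, t_dim):
        super().__init__()
        self.backbone = student_backbone                 # trainable
        self.projector = nn.Linear(s_dim, t_dim)         # maps student features into teacher space
        self.classifier = teacher_classifier             # reused from the teacher and kept frozen
        for p in self.classifier.parameters():
            p.requires_grad = False

    def forward(self, x):
        f = self.projector(self.backbone(x))
        return self.classifier(f), f                     # logits via the reused head, plus features

def feature_alignment_loss(student_feat, teacher_feat):
    return torch.mean((student_feat - teacher_feat) ** 2)   # simple L2 feature matching
```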

Knowledge Distillation with Deep Supervision

no code implementations • 16 Feb 2022 • Shiya Luo, Defang Chen, Can Wang

Knowledge distillation aims to enhance the performance of a lightweight student model by exploiting the knowledge from a pre-trained cumbersome teacher model.

Knowledge Distillation • Transfer Learning

Confidence-Aware Multi-Teacher Knowledge Distillation

1 code implementation • 30 Dec 2021 • Hailin Zhang, Defang Chen, Can Wang

Knowledge distillation was initially introduced to utilize additional supervision from a single teacher model for training the student model.

Knowledge Distillation • Transfer Learning

Online Adversarial Distillation for Graph Neural Networks

no code implementations • 28 Dec 2021 • Can Wang, Zhe Wang, Defang Chen, Sheng Zhou, Yan Feng, Chun Chen

However, its effect on graph neural networks is less than satisfactory, since the graph topology and node attributes are likely to change in a dynamic way, and in that case a static teacher model is insufficient for guiding student training.

Knowledge Distillation

Collaborative Semantic Aggregation and Calibration for Federated Domain Generalization

1 code implementation • 13 Oct 2021 • Junkun Yuan, Xu Ma, Defang Chen, Fei Wu, Lanfen Lin, Kun Kuang

Domain generalization (DG) aims to learn from multiple known source domains a model that can generalize well to unknown target domains.

Domain Generalization

Domain-Specific Bias Filtering for Single Labeled Domain Generalization

1 code implementation • 2 Oct 2021 • Junkun Yuan, Xu Ma, Defang Chen, Kun Kuang, Fei Wu, Lanfen Lin

In this paper, we investigate a Single Labeled Domain Generalization (SLDG) task with only one source domain being labeled, which is more practical and challenging than the CDG task.

Domain Generalization

Distilling Holistic Knowledge with Graph Neural Networks

1 code implementation • ICCV 2021 • Sheng Zhou, Yucheng Wang, Defang Chen, Jiawei Chen, Xin Wang, Can Wang, Jiajun Bu

The holistic knowledge is represented as a unified graph-based embedding by aggregating individual knowledge from relational neighborhood samples with graph neural networks, and the student network is learned by distilling the holistic knowledge in a contrastive manner.

Knowledge Distillation
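
A minimal sketch of the pipeline the excerpt describes: individual per-sample features are aggregated over a relational neighborhood graph to form holistic embeddings, and the student is distilled toward the teacher contrastively. The kNN graph construction, single mean-aggregation layer, and InfoNCE-style loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def knn_adjacency(feats, k=4):
    """Relational neighborhood graph over a batch: connect each sample to its k nearest neighbors."""
    sim = F.normalize(feats, dim=1) @ F.normalize(feats, dim=1).t()
    idx = sim.topk(k + 1, dim=1).indices[:, 1:]           # drop the self-match
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    return adj + torch.eye(len(feats))

def holistic_embed(feats, adj, weight):
    """Aggregate neighborhood features (mean) and project: one graph-convolution-like step."""
    deg = adj.sum(dim=1, keepdim=True)
    return (adj @ feats / deg) @ weight

def contrastive_distill(s_emb, t_emb, tau=0.1):
    """Pull each student embedding toward the teacher embedding of the same sample."""
    s, t = F.normalize(s_emb, dim=1), F.normalize(t_emb, dim=1)
    logits = s @ t.t() / tau                              # positives sit on the diagonal
    return F.cross_entropy(logits, torch.arange(len(s)))
```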

Cross-Layer Distillation with Semantic Calibration

2 code implementations • 6 Dec 2020 • Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun Chen

Knowledge distillation is a technique to enhance the generalization ability of a student model by exploiting outputs from a teacher model.

Knowledge Distillation • Transfer Learning

Online Knowledge Distillation via Multi-branch Diversity Enhancement

no code implementations • 2 Oct 2020 • Zheng Li, Ying Huang, Defang Chen, Tianren Luo, Ning Cai, Zhigeng Pan

Extensive experiments show that our method significantly enhances the diversity among student models and brings better distillation performance.

Image Classification • Knowledge Distillation

Online Knowledge Distillation with Diverse Peers

2 code implementations • 1 Dec 2019 • Defang Chen, Jian-Ping Mei, Can Wang, Yan Feng, Chun Chen

The second-level distillation is performed to further transfer the knowledge in the ensemble of auxiliary peers to the group leader, i.e., the model used for inference.

Knowledge Distillation • Transfer Learning
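
A minimal sketch of the second-level transfer described above: the auxiliary peers' predictions are ensembled and the group leader, the model kept for inference, is distilled toward that ensemble alongside the usual supervised loss. The plain average and the loss weighting are placeholders, not necessarily the paper's aggregation scheme.

```python
import torch
import torch.nn.functional as F

def leader_distill_loss(leader_logits, peer_logits_list, labels, T=3.0, alpha=0.5):
    """Second-level distillation: the leader matches the peers' ensembled soft targets."""
    ensemble = torch.stack(peer_logits_list).mean(dim=0)       # placeholder aggregation of peers
    kd = F.kl_div(F.log_softmax(leader_logits / T, dim=1),
                  F.softmax(ensemble.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(leader_logits, labels)                # standard supervised term
    return alpha * kd + (1 - alpha) * ce
```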
