Data-free Knowledge Distillation

25 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

kuluhan/pre-dfkd 9 Jan 2022

In particular, we design a Variational Autoencoder (VAE) with a training objective that is customized to learn the synthetic data representations optimally.
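As a rough illustration of using a VAE as a memory generator for pseudo replay, the sketch below shows a minimal PyTorch VAE trained with a reconstruction term plus a KL term; the layer sizes, 32x32x3 input shape, and loss weighting are assumptions for illustration, not the objective customized in the paper.

```python
# Illustrative sketch only: a small VAE that could act as a memory generator
# for pseudo replay of synthetic samples. Architecture and loss weighting are
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayVAE(nn.Module):
    def __init__(self, img_dim=3 * 32 * 32, hidden=512, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            nn.Linear(hidden, img_dim), nn.Tanh(),
        )

    def forward(self, x):
        h = self.encoder(x.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x.flatten(1))
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```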

Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning

zhanglin-pku/fedftg CVPR 2022

Instead, we propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG), which relieves the issue of direct model aggregation.

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing

Hao840/CDFKD-MFS 24 May 2022

To tackle this challenge, we propose a framework termed collaborative data-free knowledge distillation via multi-level feature sharing (CDFKD-MFS), which consists of a multi-header student module, an asymmetric adversarial data-free KD module, and an attention-based aggregation module.

Handling Data Heterogeneity in Federated Learning via Knowledge Distillation and Fusion

zxlovely/FedKF 23 Jul 2022

The key idea in FedKF is to let the server return the global knowledge to be fused with the local knowledge in each training round, so that the local model can be regularized towards the global optimum.

Dynamic Data-Free Knowledge Distillation by Easy-to-Hard Learning Strategy

ljrprocc/datafree 29 Aug 2022

In addition, CuDFKD adapts the generation target dynamically according to the status of the student model.
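One way such an easy-to-hard curriculum could look in code is sketched below: the adversarial weight on the generator (how hard the synthetic samples should be for the student) grows as student-teacher agreement improves. The agreement measure, linear schedule, and loss form are assumptions for illustration, not CuDFKD's exact criterion.

```python
# Illustrative sketch: scale the "hardness" of generated samples with the
# current student-teacher agreement (easy-to-hard). Not CuDFKD's exact rule.
import torch
import torch.nn.functional as F

def generator_loss(teacher_logits, student_logits, max_adv_weight=1.0):
    # Agreement: fraction of synthetic samples on which the student already
    # matches the teacher's prediction.
    agreement = (teacher_logits.argmax(1) == student_logits.argmax(1)).float().mean()
    adv_weight = max_adv_weight * agreement  # push harder as the student improves
    # Keep the teacher confident on generated samples (pseudo-label term) ...
    confidence = F.cross_entropy(teacher_logits, teacher_logits.argmax(1))
    # ... while rewarding samples the student has not yet mastered (adversarial term).
    divergence = F.kl_div(F.log_softmax(student_logits, dim=1),
                          F.softmax(teacher_logits, dim=1),
                          reduction="batchmean")
    return confidence - adv_weight * divergence
```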

Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint

skgyu/spaceshipnet CVPR 2023

Therefore, we propose mSARC to ensure that the student network imitates not only the logit output but also the spatial activation regions of the teacher network, alleviating the influence of unwanted noise in the diverse synthetic images on distillation learning.
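A minimal sketch of combining logit distillation with a spatial activation constraint is shown below; computing spatial maps as the channel-wise sum of squared activations is a common attention-transfer choice and is an assumption here, not the exact mSARC formulation.

```python
# Illustrative sketch: logit distillation plus a spatial activation constraint.
# Assumes student and teacher feature maps have matching spatial sizes.
import torch
import torch.nn.functional as F

def spatial_map(feat):
    # feat: (N, C, H, W) -> normalized (N, H*W) spatial activation map.
    amap = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(amap, dim=1)

def distill_loss(student_logits, teacher_logits, student_feats, teacher_feats,
                 T=4.0, region_weight=1.0):
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    region = sum(F.mse_loss(spatial_map(s), spatial_map(t))
                 for s, t in zip(student_feats, teacher_feats))
    return kd + region_weight * region
```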

Synthetic data generation method for data-free knowledge distillation in regression neural networks

zhoutianxun/data_free_kd_regression 11 Jan 2023

Knowledge distillation is the technique of compressing a larger neural network, known as the teacher, into a smaller neural network, known as the student, while preserving the performance of the larger network as much as possible.
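In the regression setting this paper targets, one simple distillation step could look like the sketch below, where the student is fit to the teacher's outputs on synthetic inputs; the uniform input sampling is a placeholder assumption, not the paper's proposed synthetic data generation method.

```python
# Illustrative sketch of a single data-free distillation step for regression:
# the student matches the teacher's outputs on synthetic inputs. The uniform
# sampling of inputs is a placeholder, not the paper's generation method.
import torch
import torch.nn.functional as F

def distill_regression_step(teacher, student, optimizer, batch_size=128, in_dim=16):
    x = torch.rand(batch_size, in_dim)          # hypothetical synthetic inputs
    with torch.no_grad():
        target = teacher(x)                     # teacher outputs as regression targets
    loss = F.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```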

Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?

zhengli97/dm-kd 22 May 2023

Diffusion models have recently achieved astonishing performance in generating high-fidelity photo-realistic images.

Revisiting Data-Free Knowledge Distillation with Poisoned Teachers

illidanlab/abd 4 Jun 2023

Data-free knowledge distillation (KD) helps transfer knowledge from a pre-trained model (known as the teacher model) to a smaller model (known as the student model) without access to the original training data used for training the teacher model.

Customizing Synthetic Data for Data-Free Student Learning

luoshiya/csd 10 Jul 2023

Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning.
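The common recipe behind synthesizing data from a pre-trained teacher can be sketched as follows: optimize noise images so that the frozen teacher assigns them confident class labels, then distill the student on those synthesized images. The pure cross-entropy synthesis objective, step counts, and learning rates below are assumptions for illustration, not any single paper's method.

```python
# Illustrative sketch of a generic data-free KD loop: invert the teacher to get
# synthetic images, then train the student to match the teacher on them.
import torch
import torch.nn.functional as F

def synthesize_batch(teacher, num_classes, batch_size=64, shape=(3, 32, 32), steps=200):
    teacher.eval()
    x = torch.randn(batch_size, *shape, requires_grad=True)
    targets = torch.randint(num_classes, (batch_size,))
    opt = torch.optim.Adam([x], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(teacher(x), targets)  # make the teacher confident on x
        loss.backward()
        opt.step()
    return x.detach()

def student_step(teacher, student, optimizer, x, T=4.0):
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```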