Search Results for author: Gongfan Fang

Found 15 papers, 12 papers with code

SlimSAM: 0.1% Data Makes Segment Anything Slim

2 code implementations · 8 Dec 2023 · Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang

To address this challenging trade-off, we introduce SlimSAM, a novel data-efficient SAM compression method that achieves superior performance with far less training data.

DeepCache: Accelerating Diffusion Models for Free

2 code implementations · 1 Dec 2023 · Xinyin Ma, Gongfan Fang, Xinchao Wang

Diffusion models have recently gained unprecedented attention in the field of image synthesis due to their remarkable generative capabilities.

Denoising · Image Generation

LLM-Pruner: On the Structural Pruning of Large Language Models

1 code implementation · NeurIPS 2023 · Xinyin Ma, Gongfan Fang, Xinchao Wang

With LLMs serving as general-purpose task solvers, we explore their compression in a task-agnostic manner, aiming to preserve the multi-task solving and language generation abilities of the original LLM.

Text Generation · Zero-Shot Learning

Structural Pruning for Diffusion Models

1 code implementation · NeurIPS 2023 · Gongfan Fang, Xinyin Ma, Xinchao Wang

Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative implications of Diffusion Probabilistic Models (DPMs).

DepGraph: Towards Any Structural Pruning

1 code implementation · CVPR 2023 · Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang

Structural pruning enables model acceleration by removing structurally-grouped parameters from neural networks.

Network Pruning · Neural Network Compression
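To make the grouping idea concrete, here is a minimal sketch in plain PyTorch (not the authors' DepGraph/Torch-Pruning code): removing output channels from one convolution only keeps the network consistent if the matching BatchNorm statistics and the next convolution's input channels are removed with it, which is exactly the kind of dependency DepGraph resolves automatically.

```python
# Minimal sketch of structurally-grouped pruning in plain PyTorch.
# This is NOT the paper's DepGraph code; it only illustrates why pruned
# parameters must be removed in groups: dropping output channels of conv1
# forces dropping bn1's statistics and conv2's matching input channels.
import torch
import torch.nn as nn

def prune_conv_pair(conv1: nn.Conv2d, bn1: nn.BatchNorm2d,
                    conv2: nn.Conv2d, keep_idx: torch.Tensor):
    """Keep only the output channels `keep_idx` of conv1 and propagate
    the change to bn1 and conv2 so the network stays consistent."""
    # conv1: prune output channels (dim 0 of the weight tensor).
    conv1.weight = nn.Parameter(conv1.weight[keep_idx].clone())
    if conv1.bias is not None:
        conv1.bias = nn.Parameter(conv1.bias[keep_idx].clone())
    conv1.out_channels = len(keep_idx)

    # bn1: its per-channel parameters follow conv1's output channels.
    bn1.weight = nn.Parameter(bn1.weight[keep_idx].clone())
    bn1.bias = nn.Parameter(bn1.bias[keep_idx].clone())
    bn1.running_mean = bn1.running_mean[keep_idx].clone()
    bn1.running_var = bn1.running_var[keep_idx].clone()
    bn1.num_features = len(keep_idx)

    # conv2: prune the matching input channels (dim 1 of the weight tensor).
    conv2.weight = nn.Parameter(conv2.weight[:, keep_idx].clone())
    conv2.in_channels = len(keep_idx)

conv1, bn1 = nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16)
conv2 = nn.Conv2d(16, 32, 3, padding=1)
keep = torch.tensor([0, 2, 4, 6, 8, 10, 12, 14])  # keep 8 of the 16 channels
prune_conv_pair(conv1, bn1, conv2, keep)
out = conv2(bn1(conv1(torch.randn(1, 3, 32, 32))))  # forward pass still works
print(out.shape)  # torch.Size([1, 32, 32, 32])
```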

Federated Selective Aggregation for Knowledge Amalgamation

1 code implementation · 27 Jul 2022 · Donglin Xie, Ruonan Yu, Gongfan Fang, Jie Song, Zunlei Feng, Xinchao Wang, Li Sun, Mingli Song

The goal of FedSA is to train a student model for a new task with the help of several decentralized teachers, whose pre-training tasks and data are different and agnostic.
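As a rough illustration of this multi-teacher setup, the sketch below distills a student from several frozen, decentralized teachers by weighting each teacher's prediction with its own confidence per sample. The weighting rule and all modules are placeholder assumptions, not FedSA's actual selective aggregation.

```python
# Hedged sketch: a student learns a new task from several frozen teachers
# without accessing their training data; each teacher's soft prediction is
# weighted by its confidence (a generic placeholder rule, not FedSA's).
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10
teachers = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes)).eval()
            for _ in range(3)]                      # pre-trained elsewhere, frozen here
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(16, 3, 32, 32)                      # unlabeled data for the new task
with torch.no_grad():
    probs = torch.stack([F.softmax(t(x), dim=1) for t in teachers])  # (T, N, C)
    conf = probs.max(dim=2).values                                   # (T, N) confidence
    weights = F.softmax(conf, dim=0).unsqueeze(-1)                   # favor confident teachers
    target = (weights * probs).sum(dim=0)                            # (N, C) soft labels

loss = F.kl_div(F.log_softmax(student(x), dim=1), target, reduction="batchmean")
opt.zero_grad(); loss.backward(); opt.step()
```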

Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt

no code implementations · 16 May 2022 · Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu

Data-free knowledge distillation (DFKD) conducts knowledge distillation by eliminating the dependence on the original training data, and has recently achieved impressive results in accelerating pre-trained language models.

Data-free Knowledge Distillation
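The data-free setting described above boils down to a short loop: a generator synthesizes inputs, the frozen teacher labels them, and the student is trained to match the teacher. The sketch below shows that generic loop with toy placeholder modules; the adversarial generator update mirrors the Data-Free Adversarial Distillation entry further down this page, not this paper's reinforced-prompt method.

```python
# Generic data-free KD loop (illustrative placeholders, not the paper's method):
# synthesize inputs, label them with the frozen teacher, train the student to
# match the teacher, and push the generator toward samples where they disagree.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, num_classes = 128, 10
generator = nn.Sequential(nn.Linear(z_dim, 3 * 32 * 32), nn.Tanh())  # toy synthesizer
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(100):
    z = torch.randn(64, z_dim)
    x = generator(z).view(-1, 3, 32, 32)           # synthetic "training data"
    with torch.no_grad():
        t_logits = teacher(x)                      # teacher's soft targets
    s_logits = student(x.detach())
    kd_loss = F.kl_div(F.log_softmax(s_logits, 1),
                       F.softmax(t_logits, 1), reduction="batchmean")
    opt_s.zero_grad(); kd_loss.backward(); opt_s.step()

    # Adversarial-style generator update: seek samples on which the student
    # still disagrees with the teacher (one common DFKD choice).
    s_logits = student(generator(z).view(-1, 3, 32, 32))
    g_loss = -F.kl_div(F.log_softmax(s_logits, 1),
                       F.softmax(t_logits, 1), reduction="batchmean")
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```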

Knowledge Amalgamation for Object Detection with Transformers

1 code implementation · 7 Mar 2022 · Haofei Zhang, Feng Mao, Mengqi Xue, Gongfan Fang, Zunlei Feng, Jie Song, Mingli Song

Moreover, the transformer-based students excel in learning amalgamated knowledge, as they have mastered heterogeneous detection tasks rapidly and achieved performance superior or at least comparable to that of the teachers in their specializations.

Object Detection +1

Up to 100× Faster Data-free Knowledge Distillation

2 code implementations · 12 Dec 2021 · Gongfan Fang, Kanya Mo, Xinchao Wang, Jie Song, Shitao Bei, Haofei Zhang, Mingli Song

At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances.

Data-free Knowledge Distillation
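One way to read the feature-reuse idea is sketched below: a shared synthesizer holds features common to all synthetic samples and is updated slowly, while each new batch only adapts a small latent code for a few steps instead of being optimized from scratch. This is an illustrative reading with placeholder modules and objectives, not the authors' implementation.

```python
# Hedged sketch of "reuse the shared common features": the shared generator's
# weights are reused across synthesis rounds; only a per-batch latent code is
# optimized in a cheap inner loop. Placeholder modules, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes)).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

shared_G = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())  # holds common features
opt_shared = torch.optim.Adam(shared_G.parameters(), lr=1e-4)     # updated slowly

def synthesize_batch(targets, fast_steps=5):
    """Adapt only a small latent code per batch; the shared generator is reused."""
    z = torch.randn(targets.size(0), 128, requires_grad=True)
    opt_z = torch.optim.Adam([z], lr=0.1)
    for _ in range(fast_steps):                       # cheap inner loop
        x = shared_G(z).view(-1, 3, 32, 32)
        loss = F.cross_entropy(teacher(x), targets)   # make the teacher confident
        opt_z.zero_grad(); opt_shared.zero_grad()
        loss.backward()
        opt_z.step()
    opt_shared.step()   # slow update: the common features improve across rounds
    return shared_G(z).view(-1, 3, 32, 32).detach()

x_syn = synthesize_batch(torch.randint(0, num_classes, (32,)))
print(x_syn.shape)  # torch.Size([32, 3, 32, 32])
```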

Contrastive Model Inversion for Data-Free Knowledge Distillation

3 code implementations · 18 May 2021 · Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song

In this paper, we propose Contrastive Model Inversion (CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue.

Contrastive Learning · Data-free Knowledge Distillation
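A minimal sketch of what "diversity as an optimizable objective" can look like is given below: an InfoNCE-style loss in which two augmented views of the same synthesized image are positives and all other synthesized images in the batch are negatives, so samples are explicitly pushed apart from one another. The function, shapes, and temperature are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of a contrastive diversity objective in the spirit of CMI.
# Embeddings would normally come from the frozen teacher applied to two
# augmentations of the current synthetic batch; here they are random tensors.
import torch
import torch.nn.functional as F

def contrastive_diversity_loss(feat_view1, feat_view2, temperature=0.1):
    """InfoNCE over a batch of synthesized samples: each sample's two views
    must match each other and differ from every other sample's views."""
    z1 = F.normalize(feat_view1, dim=1)          # (N, D) embeddings of view 1
    z2 = F.normalize(feat_view2, dim=1)          # (N, D) embeddings of view 2
    logits = z1 @ z2.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

f1 = torch.randn(64, 256, requires_grad=True)    # stand-ins for real features
f2 = torch.randn(64, 256, requires_grad=True)
loss = contrastive_diversity_loss(f1, f2)
loss.backward()  # in practice, gradients flow back into the image synthesizer
```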

Impression Space from Deep Template Network

no code implementations · 10 Jul 2020 · Gongfan Fang, Xinchao Wang, Haofei Zhang, Jie Song, Mingli Song

This network is referred to as the Template Network because its filters will be used as templates to reconstruct images from the impression.

Image Generation · Translation
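One possible reading of this snippet is sketched below: a fixed template network maps an image to an "impression" (its activations), and the image is later reconstructed by optimizing pixels until the template network reproduces that impression. The architecture and loss here are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch: store an image's impression (activations of a fixed template
# network), then reconstruct the image by inverting that impression.
import torch
import torch.nn as nn
import torch.nn.functional as F

template = nn.Sequential(                      # fixed; never trained here
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
).eval()
for p in template.parameters():
    p.requires_grad_(False)

original = torch.rand(1, 3, 32, 32)
impression = template(original)                # what gets stored / transmitted

recon = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([recon], lr=0.05)
for _ in range(200):                           # invert the impression
    loss = F.mse_loss(template(recon), impression)
    opt.zero_grad(); loss.backward(); opt.step()
print(F.mse_loss(recon, original).item())      # remaining pixel-space error
```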

Data-Free Adversarial Distillation

3 code implementations · 23 Dec 2019 · Gongfan Fang, Jie Song, Chengchao Shen, Xinchao Wang, Da Chen, Mingli Song

Knowledge Distillation (KD) has made remarkable progress in the last few years and become a popular paradigm for model compression and knowledge transfer.

Knowledge Distillation · Model Compression +2

Knowledge Amalgamation from Heterogeneous Networks by Common Feature Learning

2 code implementations · 24 Jun 2019 · Sihui Luo, Xinchao Wang, Gongfan Fang, Yao Hu, Dapeng Tao, Mingli Song

An increasing number of well-trained deep networks have been released online by researchers and developers, enabling the community to reuse them in a plug-and-play way without accessing the training annotations.
