Search Results for author: Linhang Cai

Found 7 papers, 5 papers with code

MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition

1 code implementation • 11 Aug 2022 • Chuanguang Yang, Zhulin An, Helong Zhou, Linhang Cai, Xiang Zhi, Jiwen Wu, Yongjun Xu, Qian Zhang

MixSKD mutually distills feature maps and probability distributions between the random pair of original images and their mixup images in a meaningful way.
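As a rough sketch of the mixup operation underlying MixSKD (pure Python on hypothetical flattened images; the method itself mixes images and distills feature maps and probability distributions inside the network):

```python
import random

def mixup(x1, x2, alpha=0.2):
    """Convexly combine two flattened images with a Beta-sampled weight."""
    lam = random.betavariate(alpha, alpha)
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    return mixed, lam

img_a = [0.0, 0.5, 1.0]
img_b = [1.0, 1.0, 1.0]
mixed, lam = mixup(img_a, img_b)
# each mixed pixel lies between the corresponding pixels of img_a and img_b
```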

Data Augmentation Image Classification +5

Prior Gradient Mask Guided Pruning-Aware Fine-Tuning

1 code implementation • AAAI 2022 • Linhang Cai, Zhulin An, Chuanguang Yang, Yangchun Yan, Yongjun Xu

In detail, the proposed PGMPF selectively suppresses the gradient of those "unimportant" parameters via a prior gradient mask generated by the pruning criterion during fine-tuning.
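A toy illustration of the gradient-mask idea (hypothetical names and values; PGMPF derives the mask per filter from a pruning criterion, not shown here):

```python
def masked_gradient_step(weights, grads, mask, lr=0.1):
    """Zero the gradients of 'unimportant' weights (mask == 0) before updating."""
    return [w - lr * g * m for w, g, m in zip(weights, grads, mask)]

weights = [1.0, -2.0, 0.01, 3.0]
grads   = [0.5,  0.5, 0.5,  0.5]
mask    = [1, 1, 0, 1]   # prior mask: third weight deemed unimportant
updated = masked_gradient_step(weights, grads, mask)
# the masked weight receives no update: updated[2] == 0.01
```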

Image Classification

Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution

1 code implementation • 7 Sep 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

Each auxiliary branch is guided to learn the self-supervision augmented task and to distill this distribution from teacher to student.

Image Classification Knowledge Distillation +3

Hierarchical Self-supervised Augmented Knowledge Distillation

1 code implementation • 29 Jul 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We therefore adopt an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and self-supervised auxiliary task.
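The joint label space can be sketched as the product of the original classes and the self-supervised transformations (the four-rotation task and the indexing below are illustrative assumptions, not the paper's exact formulation):

```python
NUM_CLASSES = 10    # original recognition task (e.g. a 10-class dataset)
NUM_ROTATIONS = 4   # self-supervised auxiliary task: 0, 90, 180, 270 degrees

def joint_label(class_id, rotation_id):
    """Index into the joint distribution over (class, rotation) pairs."""
    return class_id * NUM_ROTATIONS + rotation_id

# the network predicts over NUM_CLASSES * NUM_ROTATIONS joint outcomes
first = joint_label(0, 0)
last = joint_label(NUM_CLASSES - 1, NUM_ROTATIONS - 1)
```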

Knowledge Distillation Representation Learning

Mutual Contrastive Learning for Visual Representation Learning

1 code implementation • 26 Apr 2021 • Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu

We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning.

Contrastive Learning Few-Shot Learning +5

GHFP: Gradually Hard Filter Pruning

no code implementations • 6 Nov 2020 • Linhang Cai, Zhulin An, Yongjun Xu

Filter pruning is widely used to reduce the computation of deep learning, enabling the deployment of Deep Neural Networks (DNNs) in resource-limited devices.
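A minimal sketch of magnitude-based filter pruning, one common criterion (GHFP's gradual hard/soft schedule is not reproduced here): score each filter by its L1 norm and zero out the lowest-scoring fraction.

```python
def l1_filter_prune(filters, prune_ratio=0.5):
    """Zero out the filters with the smallest L1 norms."""
    scores = [sum(abs(w) for w in f) for f in filters]
    n_prune = int(len(filters) * prune_ratio)
    threshold = sorted(scores)[n_prune - 1] if n_prune else float("-inf")
    return [f if s > threshold else [0.0] * len(f)
            for f, s in zip(filters, scores)]

# hypothetical 2-weight filters; the two with the smallest L1 norms are zeroed
filters = [[0.1, -0.1], [1.0, 2.0], [0.05, 0.0], [0.5, -0.5]]
pruned = l1_filter_prune(filters)
```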

Softer Pruning, Incremental Regularization

no code implementations • 19 Oct 2020 • Linhang Cai, Zhulin An, Chuanguang Yang, Yongjun Xu

Network pruning is widely used to compress Deep Neural Networks (DNNs).

Network Pruning
