Search Results for author: Weichao Lan

Found 5 papers, 2 papers with code

Improve Knowledge Distillation via Label Revision and Data Selection

no code implementations • 3 Apr 2024 • Weichao Lan, Yiu-ming Cheung, Qing Xu, Buhua Liu, Zhikai Hu, Mengke Li, Zhenghua Chen

In addition to supervision from the ground truth, the vanilla KD method uses the teacher's predictions as soft labels to supervise the training of the student model.

Knowledge Distillation • Model Compression
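The vanilla KD objective described above combines a hard-label cross-entropy term with a soft-label term driven by the teacher's predictions. A minimal NumPy sketch of that standard formulation follows; the temperature `T`, weight `alpha`, and KL direction are illustrative defaults, not the paper's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Hard-label term: cross-entropy with the ground truth.
    p_s = softmax(student_logits)
    ce = -np.mean(np.log(p_s[np.arange(len(targets)), targets] + 1e-12))
    # Soft-label term: KL(teacher || student) at temperature T,
    # scaled by T^2 to keep its gradient magnitude comparable.
    p_t = softmax(teacher_logits, T)
    log_ratio = np.log(p_t + 1e-12) - np.log(softmax(student_logits, T) + 1e-12)
    kl = np.mean((p_t * log_ratio).sum(axis=1)) * T * T
    return alpha * ce + (1.0 - alpha) * kl
```

When the student's logits match the teacher's exactly, the soft-label term vanishes and only the ground-truth cross-entropy remains.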

Feature Fusion from Head to Tail for Long-Tailed Visual Recognition

1 code implementation • 12 Jun 2023 • Mengke Li, Zhikai Hu, Yang Lu, Weichao Lan, Yiu-ming Cheung, Hui Huang

To rectify this issue, we propose to augment tail classes by grafting the diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
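The grafting idea can be illustrated with a toy sketch: replace a random subset of a tail-class sample's feature channels with the corresponding channels from a head-class sample. The fraction `frac` and the channel-wise granularity are assumptions for illustration, not the paper's H2T procedure:

```python
import numpy as np

def h2t_fuse(tail_feat, head_feat, frac=0.3, rng=None):
    # tail_feat, head_feat: feature maps of shape (channels, H, W).
    # Graft a random fraction of head-class channels onto the tail sample.
    rng = np.random.default_rng(rng)
    fused = tail_feat.copy()
    c = tail_feat.shape[0]
    idx = rng.choice(c, size=max(1, int(frac * c)), replace=False)
    fused[idx] = head_feat[idx]
    return fused
```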

Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition

1 code implementation • 18 May 2023 • Mengke Li, Yiu-ming Cheung, Yang Lu, Zhikai Hu, Weichao Lan, Hui Huang

Based on these perturbed features, two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
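The snippet does not spell out the paper's Gaussian-form adjustment, so as background, here is a standard class-prior logit adjustment baseline for long-tailed recognition; the function name and `tau` are illustrative, and this is not the paper's method:

```python
import numpy as np

def prior_adjusted_logits(logits, class_counts, tau=1.0):
    # Post-hoc logit adjustment: subtract tau * log(class prior)
    # so that rare (tail) classes are not drowned out by head classes.
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)
```

With uniform logits, the adjusted prediction shifts toward the rarest class, which is the intended long-tail correction.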

Compact Neural Networks via Stacking Designed Basic Units

no code implementations • 3 May 2022 • Weichao Lan, Yiu-ming Cheung, Juyong Jiang

To this end, this paper presents a new method, termed TissueNet, which directly constructs compact neural networks with fewer weight parameters by independently stacking designed basic units, without requiring any additional judgment criteria.

Compressing Deep Convolutional Neural Networks by Stacking Low-dimensional Binary Convolution Filters

no code implementations • 6 Oct 2020 • Weichao Lan, Liang Lan

One popular way to reduce the memory cost of a deep CNN model is to train a binary CNN, where the weights in the convolution filters are either 1 or -1, so that each weight can be stored efficiently in a single bit.

Model Compression
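The single-bit storage claim above is easy to demonstrate: pack the sign of each weight into a bit array, giving roughly a 32x reduction over float32 storage. The helper names are hypothetical, and real binary CNNs additionally pair this with specialized bitwise convolution kernels:

```python
import numpy as np

def pack_sign_weights(w):
    # Map +1 -> bit 1 and -1 -> bit 0, then pack 8 weights per byte.
    bits = (np.sign(w) > 0).astype(np.uint8)
    return np.packbits(bits), w.size

def unpack_sign_weights(packed, n):
    # Recover the first n sign values as float +/-1.
    bits = np.unpackbits(packed)[:n]
    return bits.astype(np.float32) * 2.0 - 1.0
```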
