Search Results for author: Shanshan Lao

Found 6 papers, 4 papers with code

Masked Autoencoders Are Stronger Knowledge Distillers

no code implementations • ICCV 2023 • Shanshan Lao, Guanglu Song, Boxiao Liu, Yu Liu, Yujiu Yang

In MKD, random patches of the input image are masked, and the missing features are recovered by forcing the student to imitate the teacher's output (see the sketch after this entry).

Knowledge Distillation • object-detection +2
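A minimal sketch of the masked-distillation idea summarized above, assuming a PyTorch-style student/teacher pair that both produce spatially aligned feature maps; the function and parameter names (masked_distillation_loss, patch_size, mask_ratio) are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def masked_distillation_loss(student, teacher, images, patch_size=16, mask_ratio=0.5):
    # Random patch mask: 1 where a patch is masked, 0 where it is kept.
    B, C, H, W = images.shape
    ph, pw = H // patch_size, W // patch_size
    patch_mask = (torch.rand(B, 1, ph, pw, device=images.device) < mask_ratio).float()
    pixel_mask = F.interpolate(patch_mask, size=(H, W), mode="nearest")

    # Student sees the image with masked patches zeroed out; teacher sees the full image.
    masked_images = images * (1.0 - pixel_mask)
    student_feat = student(masked_images)          # assumed (B, D, h, w) feature map
    with torch.no_grad():
        teacher_feat = teacher(images)             # assumed same spatial size

    # Penalize the student only at masked locations, so it must recover the
    # missing feature by imitating the teacher's output there.
    feat_mask = F.interpolate(patch_mask, size=student_feat.shape[-2:], mode="nearest")
    diff = (student_feat - teacher_feat) ** 2 * feat_mask
    return diff.sum() / feat_mask.sum().clamp(min=1.0)
```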

UniKD: Universal Knowledge Distillation for Mimicking Homogeneous or Heterogeneous Object Detectors

no code implementations • ICCV 2023 • Shanshan Lao, Guanglu Song, Boxiao Liu, Yu Liu, Yujiu Yang

Bridging this semantic gap currently requires case-by-case algorithm design, which is time-consuming and relies heavily on expert tuning.

Knowledge Distillation

Rethinking Knowledge Distillation via Cross-Entropy

1 code implementation • 22 Aug 2022 • Zhendong Yang, Zhe Li, Yuan Gong, Tianke Zhang, Shanshan Lao, Chun Yuan, Yu Li

Furthermore, we smooth the student's target output and treat it as the soft target for training without a teacher, yielding a new teacher-free KD loss (tf-NKD); a sketch of this idea follows the entry below.

Knowledge Distillation
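A minimal sketch of the teacher-free idea summarized above, assuming the student's own detached prediction, smoothed toward a uniform distribution, serves as the soft target; the exact smoothing and weighting of tf-NKD in the paper may differ, and the names (teacher_free_kd_loss, smoothing, alpha) are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def teacher_free_kd_loss(student_logits, targets, smoothing=0.1, alpha=1.0):
    # Standard cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, targets)

    # Build a soft target from the student's own prediction: detach it,
    # then smooth it toward a uniform distribution (label-smoothing style).
    with torch.no_grad():
        probs = F.softmax(student_logits, dim=-1)
        num_classes = student_logits.size(-1)
        soft_target = (1.0 - smoothing) * probs + smoothing / num_classes

    # KL-style distillation term against the self-generated soft target,
    # so no teacher network is needed.
    log_probs = F.log_softmax(student_logits, dim=-1)
    kd = F.kl_div(log_probs, soft_target, reduction="batchmean")
    return ce + alpha * kd
```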
