no code implementations • 10 Apr 2024 • Jinyu Song, Weitao You, Shuhui Shi, Shuxuan Guo, Lingyun Sun, Wei Wang
In this work, we first observe that most Chinese characters can be disassembled into frequently reused components.
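As a toy illustration of this observation only (the hand-written mapping below is hypothetical and is not the paper's decomposition scheme), a few characters already show how components are shared:

```python
# Toy illustration: many Chinese characters decompose into a small set of
# reused components. This mapping is hand-written and illustrative; it is
# not the decomposition used in the paper.
components = {
    "好": ["女", "子"],        # "good" = woman + child
    "妈": ["女", "马"],        # "mother" reuses the woman component
    "骂": ["口", "口", "马"],  # "scold" reuses the horse component
}

reused, seen = set(), set()
for parts in components.values():
    for p in parts:
        if p in seen:
            reused.add(p)
        seen.add(p)

print(sorted(reused))  # components shared across characters: ['女', '马']
```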
no code implementations • 18 Nov 2022 • Bicheng Guo, Shuxuan Guo, Miaojing Shi, Peng Chen, Shibo He, Jiming Chen, Kaicheng Yu
Differentiable architecture search (DARTS) has become a mainstream direction in automated machine learning; a minimal sketch of its continuous relaxation follows this entry.
Ranked #10 on Neural Architecture Search on NAS-Bench-201 (CIFAR-100)
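For context, here is a minimal PyTorch-style sketch of the continuous relaxation at the heart of DARTS, where each edge computes a softmax-weighted mixture of candidate operations. The candidate set below is illustrative and smaller than the actual NAS-Bench-201 search space, and this is not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate ops.

    The candidate set (identity, 3x3 conv, max-pool) is illustrative; real
    search spaces such as NAS-Bench-201 use a fixed list of five operations.
    """
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
        ])
        # Architecture parameters (alpha), optimized on validation data in
        # the bi-level formulation of DARTS.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)  # relax the discrete choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, the discrete architecture keeps the op with the largest alpha:
# chosen_op = mixed_op.ops[mixed_op.alpha.argmax()]
```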
no code implementations • CVPR 2023 • Shuxuan Guo, Yinlin Hu, Jose M. Alvarez, Mathieu Salzmann
Knowledge distillation facilitates the training of a compact student network by using a deeper teacher network.
1 code implementation • NeurIPS 2021 • Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann
Knowledge distillation constitutes a simple yet effective way to improve the performance of a compact student network by exploiting the knowledge of a more powerful teacher.
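For reference, the classic soft-target distillation objective of Hinton et al. (2015) is sketched below in PyTorch. This is the generic formulation rather than the specific method proposed in the paper, and the temperature and weighting values are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic knowledge-distillation objective (Hinton et al., 2015).

    Combines a KL term between temperature-softened teacher and student
    distributions with the usual cross-entropy on ground-truth labels.
    T and alpha are illustrative hyper-parameters.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales the soft-target gradients to a comparable magnitude.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```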
no code implementations • NeurIPS 2020 • Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann
As evidenced by our experiments, our approach outperforms both training the compact network from scratch and performing knowledge distillation from a teacher.