1 code implementation • 26 May 2023 • Lei Guan, Dongsheng Li, Yanqi Shi, Jian Meng
the future weights to update the DNN parameters, enabling the gradient-based optimizer to achieve better convergence and generalization than the original optimizer without weight prediction.
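The weight-prediction idea can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the function names and the simple momentum-based one-step predictor are assumptions. The key point is that the gradient is evaluated at a *predicted* future weight rather than the current one.

```python
import numpy as np

def predicted_weights(w, velocity, lr):
    # Extrapolate one optimizer step ahead (a Nesterov-style lookahead);
    # the predictor form is an assumption for illustration.
    return w - lr * velocity

def sgd_momentum_with_prediction(grad_fn, w, lr=0.1, beta=0.9, steps=100):
    v = np.zeros_like(w)
    for _ in range(steps):
        # Evaluate the gradient at the predicted future weights
        # instead of the current ones.
        g = grad_fn(predicted_weights(w, v, lr))
        v = beta * v + g
        w = w - lr * v
    return w

# Toy quadratic: minimize ||w||^2, whose gradient is 2w.
w_final = sgd_momentum_with_prediction(lambda w: 2 * w, np.array([1.0, -2.0]))
```

On this convex toy problem the iterates converge to the minimizer at the origin; the benefit claimed in the paper concerns non-convex DNN training, which this sketch does not capture.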
1 code implementation • NeurIPS 2022 • Li Yang, Jian Meng, Jae-sun Seo, Deliang Fan
In this work, for the first time, we propose a novel alternating sparse training (AST) scheme to train multiple sparse sub-nets for dynamic inference at no extra training cost compared to training a single sparse model from scratch.
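A minimal sketch of the alternating idea, assuming magnitude-based top-k masks and a shared dense weight tensor — this is not the authors' exact AST algorithm, only an illustration of cycling through sparsity levels so that several sub-nets are trained within one run:

```python
import numpy as np

def topk_mask(w, sparsity):
    # Keep the (1 - sparsity) fraction of weights with largest magnitude.
    k = max(1, int(w.size * (1 - sparsity)))
    thresh = np.sort(np.abs(w).ravel())[-k]
    return (np.abs(w) >= thresh).astype(w.dtype)

def alternating_sparse_train(grad_fn, w, sparsities=(0.5, 0.75, 0.9),
                             lr=0.05, steps=300):
    # Cycle through the target sparsity levels; each step updates only
    # the weights active under the current sub-net's mask, so all
    # sub-nets share one dense parameter tensor.
    for t in range(steps):
        mask = topk_mask(w, sparsities[t % len(sparsities)])
        w = w - lr * grad_fn(w * mask) * mask
    return w

# Toy objective: pull active weights toward 1 (gradient of ||v - 1||^2).
w0 = np.linspace(0.1, 0.8, 8)
w_final = alternating_sparse_train(lambda v: 2 * (v - 1.0), w0.copy())
```

In this toy run the four largest-magnitude weights (shared by all three sub-nets) converge toward the target, while weights outside every mask stay frozen.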
no code implementations • CVPR 2022 • Jian Meng, Li Yang, Jinwoo Shin, Deliang Fan, Jae-sun Seo
Contrastive learning (or its variants) has recently become a promising direction in the self-supervised learning domain, achieving performance comparable to supervised learning with minimal fine-tuning.
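For context, the standard contrastive objective in this family is the InfoNCE / NT-Xent loss: two augmented views of the same image are pulled together while all other samples in the batch act as negatives. A minimal NumPy sketch (generic formulation, not this paper's specific method):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent loss between two augmented views.

    z1, z2: (N, D) embeddings of the same N images under two augmentations.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                   # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    # The positive for sample i is its other augmented view.
    pos = np.roll(np.arange(2 * n), -n)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Aligned views (identical embeddings for each pair) yield a lower loss than mismatched views, which is the property the self-supervised pre-training exploits.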