1 code implementation • ICCV 2023 • Longrong Yang, Xianpan Zhou, XueWei Li, Liang Qiao, Zheyang Li, Ziwei Yang, Gaoang Wang, Xi Li
Thus, the optimum of the distillation loss does not necessarily lead to the optimal student classification scores for dense object detectors.
1 code implementation • CVPR 2023 • Wei Su, Peihan Miao, Huanzhang Dou, Gaoang Wang, Liang Qiao, Zheyang Li, Xi Li
The active perception can take expressions as priors to extract relevant visual features, which can effectively alleviate the mismatches.
no code implementations • 2 Mar 2023 • Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guocan Feng, Ziwei Yang, Zheyang Li, Quanshi Zhang
Various attribution methods have been developed to explain deep neural networks (DNNs) by inferring the attribution/importance/contribution score of each input variable to the final output.
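The attribution idea above can be illustrated on a toy linear model, where the exact contribution of each input variable is known in closed form. This is a minimal Gradient × Input sketch (a generic attribution recipe, not the method proposed in the paper):

```python
import numpy as np

# Toy linear "network": y = w . x (no bias), so the exact
# contribution of each input variable is w_i * x_i.
w = np.array([0.5, -1.0, 2.0])
x = np.array([2.0, 1.0, 0.5])

# Gradient x Input attribution: for a linear model the gradient
# dy/dx_i is simply w_i, so the scores decompose the output exactly.
attributions = w * x          # [1.0, -1.0, 1.0]
output = float(w @ x)         # 1.0

# Completeness: the attribution scores sum to the model output.
assert np.isclose(attributions.sum(), output)
print(attributions)  # [ 1. -1.  1.]
```

For a deep nonlinear DNN this exact decomposition no longer holds, which is precisely why many competing attribution definitions exist.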
no code implementations • CVPR 2023 • Chen Lin, Bo Peng, Zheyang Li, Wenming Tan, Ye Ren, Jun Xiao, ShiLiang Pu
To this end, we detach a sharpness term from the loss which reflects the impact of quantization noise.
1 code implementation • ICCV 2023 • Heng Zhao, Shenxing Wei, Dahu Shi, Wenming Tan, Zheyang Li, Ye Ren, Xing Wei, Yi Yang, ShiLiang Pu
Taking the symmetry properties of objects into consideration, we design a symmetry-aware matching loss to facilitate the learning of dense point-wise geometry features and improve the performance considerably.
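The core idea of a symmetry-aware matching loss can be sketched as taking the minimum point-matching error over an object's symmetry transforms, so that poses equivalent under symmetry are not penalized. This toy example (illustrative only; not the paper's exact loss) uses a square symmetric under 90° rotations:

```python
import numpy as np

def symmetry_aware_loss(pred, gt, sym_rotations):
    """Minimum mean point-matching error over symmetry transforms,
    so symmetry-equivalent poses incur no penalty (illustrative)."""
    errs = [np.mean(np.linalg.norm(pred - gt @ R.T, axis=1))
            for R in sym_rotations]
    return min(errs)

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# A square is symmetric under 90-degree rotations about z.
gt = np.array([[1., 0., 0.], [0., 1., 0.], [-1., 0., 0.], [0., -1., 0.]])
pred = gt @ rot_z(np.pi / 2).T      # prediction rotated by one symmetry
syms = [rot_z(k * np.pi / 2) for k in range(4)]

loss = symmetry_aware_loss(pred, gt, syms)
print(loss)  # ~0: symmetry-equivalent poses match
```

A naive per-point L2 loss would instead report a large error for this prediction even though the pose is indistinguishable from the ground truth.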
1 code implementation • NeurIPS 2022 • Zheng Chuanyang, Zheyang Li, Kai Zhang, Zhi Yang, Wenming Tan, Jun Xiao, Ye Ren, ShiLiang Pu
In this paper, we introduce joint importance, which for the first time integrates essential structure-aware interactions between components, to perform collaborative pruning.
1 code implementation • 2 Aug 2022 • Qiming Yang, Kai Zhang, Chaoxiang Lan, Zhi Yang, Zheyang Li, Wenming Tan, Jun Xiao, ShiLiang Pu
To tackle these issues, we propose Unified Normalization (UN), which speeds up inference by being fused with other linear operations while achieving performance on par with LN.
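The fusion trick mentioned above relies on a general fact: a normalization with fixed statistics (as at inference time) is an affine map and can be folded into an adjacent linear layer. A minimal sketch of this folding (generic normalization-into-linear fusion, not UN's specific formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 4, 3
x = rng.normal(size=d_in)

# Normalization with *fixed* statistics (inference time):
# norm(x) = gamma * (x - mu) / sqrt(var + eps) + beta  -- an affine map.
mu, var = rng.normal(size=d_in), rng.uniform(0.5, 2.0, size=d_in)
gamma, beta = rng.normal(size=d_in), rng.normal(size=d_in)
std = np.sqrt(var + 1e-5)

# Following linear layer: z = W @ norm(x) + b
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)

# Fuse: norm(x) = a * x + c  with  a = gamma/std, c = beta - gamma*mu/std,
# hence  W @ norm(x) + b = (W * a) @ x + (W @ c + b).
a = gamma / std
c = beta - gamma * mu / std
W_fused = W * a            # rescale each input column of W
b_fused = W @ c + b

z_ref = W @ (gamma * (x - mu) / std + beta) + b
z_fused = W_fused @ x + b_fused
assert np.allclose(z_ref, z_fused)
```

After fusion the normalization costs nothing at inference, which is the speedup UN targets; LN cannot be fused this way because its statistics depend on each input.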
1 code implementation • 16 Apr 2022 • Yulei Lu, Yawei Luo, Li Zhang, Zheyang Li, Yi Yang, Jun Xiao
A thriving trend in domain-adaptive segmentation endeavors to generate high-quality pseudo labels for the target domain and retrain the segmentor on them.
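A common ingredient of this retraining recipe is confidence thresholding: only confident target-domain predictions become pseudo labels, and the rest are ignored. A minimal sketch of that step (one standard recipe, not the paper's specific label-generation scheme):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Keep only confident target-domain predictions as pseudo labels;
    low-confidence pixels get -1 (ignored during retraining)."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = -1
    return labels

# Three "pixels" with 2-class softmax outputs.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
labs = pseudo_labels(probs)
print(labs)  # [ 0 -1  1]
```

The segmentor is then retrained with a loss that masks out the -1 entries, so noisy predictions do not corrupt the target-domain training signal.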
no code implementations • 17 Jan 2022 • Chen Lin, Zheyang Li, Bo Peng, Haoji Hu, Wenming Tan, Ye Ren, ShiLiang Pu
This paper introduces a post-training quantization (PTQ) method that achieves highly efficient Convolutional Neural Network (CNN) quantization with high performance.
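For context, the simplest form of PTQ rounds pretrained weights onto a uniform integer grid with no retraining. A minimal symmetric per-tensor sketch (a baseline illustration, not the method of this paper):

```python
import numpy as np

def quantize_per_tensor(w, n_bits=8):
    """Symmetric uniform post-training quantization of a weight tensor."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for int8
    scale = np.abs(w).max() / qmax        # map max magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(16, 16)).astype(np.float32)
q, s = quantize_per_tensor(w)
w_hat = dequantize(q, s)

# Rounding error is bounded by half the quantization step.
print(float(np.abs(w - w_hat).max()))
```

More advanced PTQ methods then calibrate scales or rounding on a small dataset to reduce the accuracy drop this naive rounding causes.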
1 code implementation • 12 Aug 2021 • Ziwei Yang, Ruyi Zhang, Zhi Yang, Xubo Yang, Lei Wang, Zheyang Li
One-shot neural architecture search (NAS) applies a weight-sharing supernet to reduce the prohibitive computation overhead of automated architecture design.
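The weight-sharing idea can be sketched as a supernet that holds candidate operations per layer, with each sampled architecture selecting one op per layer and reusing the shared weights rather than training from scratch. A toy single-path sketch (illustrative; layer shapes and sampling are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Supernet: each layer holds weights for several candidate ops; all
# sampled sub-networks share these weights.
n_layers, n_choices, dim = 3, 2, 4
supernet = [
    [rng.normal(size=(dim, dim)) for _ in range(n_choices)]
    for _ in range(n_layers)
]

def forward(arch, x):
    """Run one sampled sub-network: arch picks one op per layer."""
    for layer, choice in zip(supernet, arch):
        x = np.tanh(layer[choice] @ x)
    return x

# Single-path sampling: evaluate one random architecture per step, so
# no candidate network is ever trained on its own from scratch.
x = rng.normal(size=dim)
arch = [int(rng.integers(n_choices)) for _ in range(n_layers)]
y = forward(arch, x)
print(arch, y.shape)
```

Ranking candidates by their shared-weight accuracy is exactly where the correlation problems studied in this line of work arise.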
1 code implementation • 12 Aug 2021 • Ruyi Zhang, Ziwei Yang, Zhi Yang, Xubo Yang, Lei Wang, Zheyang Li
To alleviate this problem, we propose a novel framework to train an accuracy predictor with only a few training samples.
no code implementations • ECCV 2018 • Bo Peng, Wenming Tan, Zheyang Li, Shun Zhang, Di Xie, ShiLiang Pu
In this paper, we propose a novel decomposition method based on filter group approximation, which can significantly reduce the redundancy of deep convolutional neural networks (CNNs) while maintaining most of the feature representation.
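The parameter savings from a group-structured decomposition are easy to quantify. This sketch counts parameters for replacing a standard KxK convolution with a grouped KxK convolution followed by a 1x1 mixing convolution (a common group-approximation scheme; the numbers are illustrative, not the paper's exact construction):

```python
# Decompose a standard KxK conv into a grouped KxK conv + 1x1 conv.
c_in, c_out, k, groups = 256, 256, 3, 8

dense = c_in * c_out * k * k                  # standard conv
grouped = (c_in // groups) * c_out * k * k    # grouped conv
pointwise = c_out * c_out                     # 1x1 mixing conv
decomposed = grouped + pointwise

print(dense, decomposed, round(dense / decomposed, 2))
# 589824 139264 4.24  -- roughly 4x fewer parameters here
```

The 1x1 convolution restores cross-group information flow that the grouped convolution removes, which is what lets the decomposition preserve most of the representation.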