no code implementations • ECCV 2020 • Xiangyu He, Zitao Mo, Ke Cheng, Weixiang Xu, Qinghao Hu, Peisong Wang, Qingshan Liu, Jian Cheng
The matrix composed of basis vectors is referred to as the proxy matrix, and auxiliary variables serve as the coefficients of this linear combination.
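The decomposition described here can be illustrated with a small NumPy sketch. All shapes, and the final `sign` binarization step, are assumptions for illustration rather than the paper's exact formulation: latent weights are written as linear combinations of the proxy matrix's basis vectors, with the auxiliary variables acting as the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 4 basis vectors of dimension 8 form the proxy matrix P;
# each of the 16 weight vectors is a linear combination of these basis vectors.
P = rng.standard_normal((4, 8))    # proxy matrix (rows = basis vectors)
C = rng.standard_normal((16, 4))   # auxiliary variables = combination coefficients

W_latent = C @ P                   # latent real-valued weights
W_binary = np.sign(W_latent)       # binarized weights used at inference
```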
1 code implementation • ICML 2020 • Peisong Wang, Qiang Chen, Xiangyu He, Jian Cheng
Network quantization is essential for deploying deep models on IoT devices because of its high efficiency, whether on specialized hardware such as TPUs or on general-purpose hardware such as CPUs and GPUs.
1 code implementation • 25 Nov 2023 • Yingying Deng, Xiangyu He, Fan Tang, WeiMing Dong
Despite the remarkable progress in image style transfer, formulating style in the context of art is inherently subjective and challenging.
1 code implementation • 13 Jun 2022 • Yanpeng Sun, Qiang Chen, Xiangyu He, Jian Wang, Haocheng Feng, Junyu Han, Errui Ding, Jian Cheng, Zechao Li, Jingdong Wang
In this paper, we rethink the paradigm and explore a new regime: {\em fine-tuning a small part of parameters in the backbone}.
Ranked #8 on Few-Shot Semantic Segmentation on COCO-20i (1-shot)
1 code implementation • 4 Apr 2022 • Weixiang Xu, Xiangyu He, Tianli Zhao, Qinghao Hu, Peisong Wang, Jian Cheng
The latest STTN shows that ResNet-18 with ternary weights and ternary activations achieves up to 68.2% Top-1 accuracy on ImageNet.
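Ternarization constrains each weight to {-1, 0, +1}. A minimal sketch of threshold-based ternarization follows; the threshold value and the thresholding rule itself are illustrative assumptions, not STTN's actual scheme:

```python
import numpy as np

def ternarize(w, delta=0.05):
    """Map weights to {-1, 0, +1} using a symmetric threshold delta.
    (Illustrative only; STTN's ternarization differs.)"""
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    return t

w = np.array([0.3, -0.02, -0.4, 0.01])
t = ternarize(w)   # values become 1, 0, -1, 0
```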
no code implementations • 25 Jan 2022 • Xiangyu He, Jian Cheng
Super-resolution as an ill-posed problem has many high-resolution candidates for a low-resolution input.
1 code implementation • CVPR 2022 • Jiahao Lu, Xi Sheryl Zhang, Tianli Zhao, Xiangyu He, Jian Cheng
By showing how vision Transformers are at risk of privacy leakage via gradients, we underscore the importance of designing privacy-safer Transformer models and defense schemes.
no code implementations • 12 Oct 2021 • Weixiang Xu, Qiang Chen, Xiangyu He, Peisong Wang, Jian Cheng
Binary Neural Networks (BNNs) rely on a real-valued auxiliary variable W to help binary training.
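The standard way such an auxiliary variable is used, which this sketch illustrates, is sign-binarization in the forward pass with a straight-through estimator (STE) in the backward pass; the specific clipping rule shown is the common baseline, not necessarily this paper's proposal:

```python
import numpy as np

W = np.array([0.7, -0.2, 0.05, -1.3])    # real-valued auxiliary variable
B = np.sign(W)                            # binary weights used in the forward pass

upstream_grad = np.array([0.5, -1.0, 2.0, 0.1])

# Straight-through estimator: pass the gradient through sign() unchanged,
# but zero it where |W| > 1 so the auxiliary variable stays bounded.
grad_W = upstream_grad * (np.abs(W) <= 1.0)
```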
no code implementations • 1 Sep 2021 • Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
Acceleration of deep neural networks to meet a specific latency constraint is essential for their deployment on mobile devices.
no code implementations • 21 Jan 2021 • Xiangyu He, Qinghao Hu, Peisong Wang, Jian Cheng
Convolutional neural networks are able to learn realistic image priors from numerous training samples in low-level image generation and restoration.
1 code implementation • ICCV 2021 • Fanrong Li, Gang Li, Xiangyu He, Jian Cheng
In particular, dynamic dual gating can provide 59.7% saving in computing of ResNet50 with 76.41% top-1 accuracy on ImageNet, which has advanced the state-of-the-art.
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
1 code implementation • 13 Nov 2019 • Xiangyu He, Zitao Mo, Qiang Chen, Anda Cheng, Peisong Wang, Jian Cheng
Many successful learning targets such as minimizing dice loss and cross-entropy loss have enabled unprecedented breakthroughs in segmentation tasks.
Ranked #35 on Semantic Segmentation on PASCAL Context
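The dice loss mentioned above has a compact closed form. Here is a minimal NumPy sketch of the soft Dice loss for a binary mask (the smoothing constant `eps` is a common convention, assumed here):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation: 1 - 2|P∩T| / (|P| + |T|)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

pred = np.array([0.9, 0.8, 0.1, 0.2])     # predicted foreground probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth binary mask
loss = dice_loss(pred, target)
```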
1 code implementation • 19 Oct 2019 • Qiang Chen, Anda Cheng, Xiangyu He, Peisong Wang, Jian Cheng
Object location is fundamental to panoptic segmentation as it is related to all things and stuff in the image scene.
Ranked #17 on Panoptic Segmentation on COCO test-dev
no code implementations • 24 Sep 2019 • Fanrong Li, Zitao Mo, Peisong Wang, Zejian Liu, Jiayun Zhang, Gang Li, Qinghao Hu, Xiangyu He, Cong Leng, Yang Zhang, Jian Cheng
As a case study, we evaluate our object detection system on a real-world surveillance video with input size of 512x512, and it turns out that the system can achieve an inference speed of 18 fps at the cost of 6.9 W (with display) with an mAP of 66.4 verified on the PASCAL VOC 2012 dataset.
1 code implementation • 23 Jul 2019 • Xiangyu He, Ke Cheng, Qiang Chen, Qinghao Hu, Peisong Wang, Jian Cheng
Long-range dependency modeling, widely used to capture spatiotemporal correlation, has been shown to be effective in CNN-dominated computer vision tasks.
Ranked #208 on Object Detection on COCO test-dev
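A common way to model such long-range dependencies is a non-local (self-attention) operation, where every position attends to every other position. The sketch below uses identity embeddings for brevity, which is a simplifying assumption rather than this paper's design:

```python
import numpy as np

def nonlocal_block(x):
    """Minimal non-local operation: softmax over pairwise affinities,
    followed by a residual connection. x has shape (positions, channels)."""
    attn = x @ x.T                                      # pairwise affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)             # row-wise softmax
    return x + attn @ x                                 # residual connection

x = np.random.default_rng(0).standard_normal((5, 4))
y = nonlocal_block(x)
```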
no code implementations • CVPR 2019 • Xiangyu He, Peisong Wang, Jian Cheng
Hashing based approximate nearest neighbor search embeds high dimensional data to compact binary codes, which enables efficient similarity search and storage.
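The simplest instance of this idea is sign-based random projection (LSH-style) hashing; the sketch below shows the embedding to compact binary codes and a Hamming-distance nearest-neighbour lookup. The projection dimensions are illustrative assumptions, and this baseline is not the paper's learned hashing scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))      # high-dimensional data
R = rng.standard_normal((64, 16))       # random projection to 16 bits

codes = (X @ R > 0).astype(np.uint8)    # compact binary codes

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

# Nearest neighbour of X[0] in Hamming space (excluding itself)
d = np.count_nonzero(codes != codes[0], axis=1)
d[0] = codes.shape[1] + 1               # mask out the query itself
nn = int(np.argmin(d))
```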
no code implementations • CVPR 2019 • Xiangyu He, Zitao Mo, Peisong Wang, Yang Liu, Mingyuan Yang, Jian Cheng
By casting the numerical schemes in ODE as blueprints, we derive two types of network structures: LF-block and RK-block, which correspond to the Leapfrog method and Runge-Kutta method in numerical ordinary differential equations.
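The ODE view reads a residual block x_{n+1} = x_n + f(x_n) as one forward-Euler step of dx/dt = f(x), while Leapfrog, x_{n+1} = x_{n-1} + 2 f(x_n), uses the two previous states. A minimal sketch of both update rules, with a stand-in residual function (an assumption, not the paper's learned branch):

```python
import numpy as np

def f(x):
    # stand-in for a learned residual branch
    return -0.5 * x

def euler_step(x_n):
    # ResNet-style block: one forward-Euler step of dx/dt = f(x)
    return x_n + f(x_n)

def leapfrog_step(x_prev, x_n):
    # LF-block analogue: Leapfrog advances from the state two steps back
    return x_prev + 2.0 * f(x_n)

x0 = np.ones(3)
x1 = euler_step(x0)          # bootstrap the two-step scheme with one Euler step
x2 = leapfrog_step(x0, x1)
```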
no code implementations • ECCV 2018 • Xiangyu He, Jian Cheng
Through quantization or pruning, most methods can compress a large number of parameters, but they overlook the core cause of performance degradation: the Gaussian conjugate prior induced by batch normalization.