Search Results for author: Lianwei Yang

Found 3 papers, 1 paper with code

BinaryViT: Towards Efficient and Accurate Binary Vision Transformers

no code implementations • 24 May 2023 • Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu

In this paper, we first show empirically that the severe performance degradation is mainly caused by weight oscillation during binarization training and information distortion in the activations of ViTs.

Binarization • Quantization
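For context on the abstract above: binarization training typically maps real-valued latent weights to ±1 in the forward pass and updates them through a straight-through estimator, and latent weights near zero can flip sign between updates. The sketch below illustrates that generic setup only; it is not the BinaryViT method, and the toy loss and learning rate are placeholders.

```python
# Minimal sketch of binarization-aware training with a straight-through
# estimator (STE). Generic illustration, not the BinaryViT method.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                      # forward pass uses +1/-1 weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # STE: pass gradient where |w| <= 1

latent_w = torch.randn(8, requires_grad=True)     # real-valued latent weights
target = torch.ones(8)
for step in range(3):
    w_bin = BinarizeSTE.apply(latent_w)
    loss = ((w_bin - target) ** 2).mean()         # toy loss, placeholder only
    loss.backward()
    with torch.no_grad():
        latent_w -= 0.1 * latent_w.grad           # latent weights near 0 can flip
        latent_w.grad.zero_()                     # sign from one step to the next
    print(step, torch.sign(latent_w).tolist())
```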

Patch-wise Mixed-Precision Quantization of Vision Transformer

no code implementations • 11 May 2023 • Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu

As emerging hardware begins to support mixed bit-width arithmetic computation, mixed-precision quantization is widely used to reduce the complexity of neural networks.

Quantization
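As a rough illustration of mixed-precision quantization in general (not the paper's patch-wise scheme), the sketch below applies a symmetric uniform quantizer with a different bit-width per tensor; the layer names and bit assignments are hypothetical.

```python
# Minimal sketch of mixed-precision quantization: a symmetric uniform
# quantizer where each tensor gets its own bit-width, so sensitive parts
# keep more precision than robust ones.
import torch

def uniform_quantize(x: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Symmetric uniform quantize-dequantize to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.clamp(torch.round(x / scale), -qmax, qmax) * scale

# Hypothetical per-tensor bit assignment (the paper assigns precision per patch)
weights = {"attn.qkv": torch.randn(256), "mlp.fc1": torch.randn(256)}
bit_config = {"attn.qkv": 8, "mlp.fc1": 4}        # mixed bit-widths

for name, w in weights.items():
    w_q = uniform_quantize(w, bit_config[name])
    err = (w - w_q).abs().mean().item()
    print(f"{name}: {bit_config[name]}-bit, mean abs error {err:.4f}")
```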

RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers

1 code implementation • ICCV 2023 • Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu

Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a lightweight and practical model compression technique.

Model Compression • Quantization
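For context on PTQ in general (this is not RepQ-ViT's scale reparameterization), the sketch below calibrates per-layer activation scales from a tiny batch using forward hooks on a frozen toy model, with no retraining; the model and calibration data are placeholders.

```python
# Generic PTQ calibration sketch: run a tiny calibration batch through a
# frozen model, record per-layer activation ranges with forward hooks, and
# derive 8-bit scales from them without any retraining.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.GELU(), nn.Linear(32, 10)).eval()
calib_batch = torch.randn(32, 16)                 # tiny calibration set

act_max = {}                                      # per-layer |activation| maxima
def make_hook(name):
    def hook(module, inputs, output):
        act_max[name] = max(act_max.get(name, 0.0), output.abs().max().item())
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if isinstance(m, nn.Linear)]
with torch.no_grad():
    model(calib_batch)
for h in handles:
    h.remove()

# Symmetric 8-bit activation scales from the calibrated ranges
scales = {name: amax / 127.0 for name, amax in act_max.items()}
print(scales)
```

Min-max calibration is just the simplest choice here; the point is only that calibration statistics replace end-to-end retraining.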
