Search Results for author: Misun Yu

Found 2 papers, 0 papers with code

Q-HyViT: Post-Training Quantization for Hybrid Vision Transformer with Bridge Block Reconstruction

no code implementations • 22 Mar 2023 • Jemin Lee, Yongin Kwon, Jeman Park, Misun Yu, Sihyeong Park, Hwanjun Song

To overcome these challenges, we propose a new post-training quantization method, the first to quantize efficient hybrid ViTs (MobileViTv1, MobileViTv2, Mobile-Former, EfficientFormerV1, EfficientFormerV2), which outperforms existing PTQ methods (EasyQuant, FQ-ViT, and PTQ4ViT) by a significant margin (an average improvement of 8.32% for 8-bit and 26.02% for 6-bit).

Quantization
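For context, the following is a minimal sketch of the generic post-training quantization step such methods build on: asymmetric min-max uniform quantization of a weight tensor. It is a hypothetical illustration of basic PTQ, not the bridge block reconstruction proposed in Q-HyViT, and the function names are assumptions.

```python
import numpy as np

def uniform_quantize(w, num_bits=8):
    """Asymmetric min-max uniform quantization of a weight tensor.

    Generic PTQ building block (hypothetical illustration), not the
    bridge block reconstruction proposed in Q-HyViT.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1e-8  # guard against a constant tensor
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integer codes back to approximate floating-point values."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(64, 64).astype(np.float32)
q, s, z = uniform_quantize(w, num_bits=8)
err = np.abs(w - dequantize(q, s, z)).max()
print(f"max reconstruction error: {err:.6f}")
```

Lower bit widths shrink the representable grid, which is why 6-bit settings are harder than 8-bit ones, consistent with the margins reported above.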

Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment

no code implementations • 10 Feb 2022 • Jemin Lee, Misun Yu, Yongin Kwon, TaeHo Kim

To deploy convolutional neural networks (CNNs) on a range of resource-constrained targets, the models must be compressed by quantization, whereby full-precision representations are converted to lower-bit representations.

Quantization
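The paper's title indicates that extreme gradient boosting guides the quantization search for fast deployment. Below is a minimal sketch of that general idea, assuming an XGBoost regressor that predicts accuracy from a candidate quantization configuration; the feature encoding and the synthetic data are hypothetical assumptions, not the paper's actual setup.

```python
import numpy as np
from xgboost import XGBRegressor

# Hypothetical illustration: train a gradient-boosted model to predict
# the accuracy of candidate quantization configurations, then pick the
# most promising one without measuring every candidate on hardware.
rng = np.random.default_rng(0)

# Each row encodes one candidate config as [bit_width, per_channel(0/1),
# calibration_set_size]; labels are synthetic stand-in accuracies.
X = rng.uniform([4, 0, 16], [8, 1, 512], size=(64, 3))
y = 0.6 + 0.04 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.01, 64)

model = XGBRegressor(n_estimators=50, max_depth=3)
model.fit(X, y)

candidates = rng.uniform([4, 0, 16], [8, 1, 512], size=(10, 3))
best = candidates[model.predict(candidates).argmax()]
print("predicted-best config (bits, per_channel, calib_size):", best)
```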
