Search Results for author: Rongwei Lu

Found 2 papers, 0 papers with code

Retraining-free Model Quantization via One-Shot Weight-Coupling Learning

no code implementations • 3 Jan 2024 • Chen Tang, Yuan Meng, Jiacheng Jiang, Shuzhao Xie, Rongwei Lu, Xinzhu Ma, Zhi Wang, Wenwu Zhu

Conversely, mixed-precision quantization (MPQ) is advocated to compress the model effectively by allocating heterogeneous bit-widths across layers.

Model Compression • Quantization
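As background for the excerpt above, the core idea of mixed-precision quantization is to give each layer its own bit-width instead of a uniform one. The following is a minimal sketch of per-layer uniform weight quantization with heterogeneous bit-widths; the bit allocation and layer names are illustrative assumptions, not the paper's retraining-free weight-coupling method.

```python
# Sketch: mixed-precision weight quantization with per-layer bit-widths.
# The allocation {8, 4, 2} below is an assumed example, not the paper's result.
import torch

def quantize_uniform(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit
    scale = w.abs().max().clamp(min=1e-8) / qmax    # per-tensor scale
    return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

model = torch.nn.Sequential(
    torch.nn.Linear(16, 16), torch.nn.Linear(16, 16), torch.nn.Linear(16, 4)
)

# Heterogeneous bit-widths: sensitive layers keep more bits than robust ones.
bit_allocation = {0: 8, 1: 4, 2: 2}
for layer_idx, bits in bit_allocation.items():
    layer = model[layer_idx]
    layer.weight.data = quantize_uniform(layer.weight.data, bits)
```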

DAGC: Data-Volume-Aware Adaptive Sparsification Gradient Compression for Distributed Machine Learning in Mobile Computing

no code implementations • 13 Nov 2023 • Rongwei Lu, Yutong Jiang, Yinan Mao, Chen Tang, Bin Chen, Laizhong Cui, Zhi Wang

Assigning varying compression ratios to workers with distinct data distributions and volumes is thus a promising solution.
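To make the excerpt concrete, the sketch below assigns per-worker top-k sparsification ratios in proportion to each worker's data volume before compressing its local gradient. The allocation rule, worker names, and data volumes are assumptions for illustration only, not DAGC's actual data-volume-aware algorithm.

```python
# Sketch: data-volume-aware gradient sparsification with per-worker ratios.
# Allocation rule below (proportional to data volume) is an assumed example.
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float) -> np.ndarray:
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

# Assumed data volumes: workers with more data get a weaker compression ratio.
data_volumes = {"worker_a": 50_000, "worker_b": 10_000, "worker_c": 2_000}
total = sum(data_volumes.values())
base_ratio = 0.05  # average keep-ratio across the cluster

ratios = {
    w: min(1.0, base_ratio * len(data_volumes) * v / total)
    for w, v in data_volumes.items()
}

rng = np.random.default_rng(0)
for worker, r in ratios.items():
    grad = rng.standard_normal(1_000)          # dummy local gradient
    compressed = topk_sparsify(grad, r)
    print(worker, f"ratio={r:.3f}", f"nonzeros={np.count_nonzero(compressed)}")
```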
