Search Results for author: Pengbo Wang

Found 3 papers, 0 papers with code

Distillation Matters: Empowering Sequential Recommenders to Match the Performance of Large Language Model

no code implementations • 1 May 2024 • Yu Cui, Feng Liu, Pengbo Wang, Bohao Wang, Heng Tang, Yi Wan, Jun Wang, Jiawei Chen

Owing to their powerful semantic reasoning capabilities, Large Language Models (LLMs) have been effectively utilized as recommenders, achieving impressive performance.

Knowledge Distillation • Language Modelling • +1
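
The title refers to distilling an LLM-based recommender into a lightweight sequential model. For reference only, here is a minimal sketch of the generic soft-label knowledge-distillation objective (Hinton et al., 2015), not the paper's specific method; tensor shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KD loss: KL divergence between temperature-scaled
    teacher and student distributions over candidate items."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Toy usage (illustrative sizes): batch of 4 users, 100 candidate items.
teacher_logits = torch.randn(4, 100)   # e.g. scores from an LLM-based recommender
student_logits = torch.randn(4, 100)   # e.g. scores from a small sequential model
loss = distillation_loss(student_logits, teacher_logits)
```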

Quick and Reliable LoRa Physical-layer Data Aggregation through Multi-Packet Reception

no code implementations • 13 Dec 2022 • Lizhao You, Zhirong Tang, Pengbo Wang, Zhaorui Wang, Haipeng Dai, Liqun Fu

Trace-driven simulation results show that the symbol demodulation algorithm outperforms the state-of-the-art MPR decoder by 5.3$\times$ in terms of physical-layer throughput, and that the soft decoder is more robust to the unavoidable phase misalignment and estimation error encountered in practice.

Decoder
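
For context on the hard-vs-soft decoder distinction in the abstract: LoRa symbols are chirps demodulated by dechirping and taking an FFT; a hard decision picks the peak bin, while a soft decoder keeps the full spectrum as reliability information. The NumPy sketch below shows this standard single-packet dechirp pipeline only, not the paper's multi-packet reception algorithm; the parameters are illustrative.

```python
import numpy as np

# Illustrative parameters (not from the paper): spreading factor 7 -> N = 2**7 chips.
SF = 7
N = 2 ** SF

def upchirp(symbol: int) -> np.ndarray:
    """Baseband LoRa up-chirp whose cyclic shift encodes `symbol`."""
    k = np.arange(N)
    return np.exp(2j * np.pi * (k * symbol / N + k * k / (2 * N)))

def demodulate(rx: np.ndarray):
    """Dechirp-and-FFT demodulation: hard symbol plus soft FFT magnitudes."""
    dechirped = rx * np.conj(upchirp(0))     # cancel the base chirp
    spectrum = np.abs(np.fft.fft(dechirped))  # tone at the symbol's bin
    return int(np.argmax(spectrum)), spectrum  # hard decision, soft information

tx_symbol = 42
hard, soft = demodulate(upchirp(tx_symbol))
assert hard == tx_symbol
```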
