no code implementations • 14 Oct 2023 • Huatao Xu, Liying Han, Qirui Yang, Mo Li, Mani Srivastava
Recent developments in Large Language Models (LLMs) have demonstrated their remarkable capabilities across a range of tasks.
1 code implementation • 12 Nov 2022 • Linshan Jiang, Qun Song, Rui Tan, Mo Li
This paper presents the design of a system called PriMask, in which the mobile device uses a secret small-scale neural network called MaskNet to mask the data before transmission.
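The masking idea can be illustrated with a minimal sketch: a small on-device network transforms each sample before it leaves the phone, so only the masked version is transmitted. The network size and the specific transform below are illustrative assumptions, not PriMask's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

class MaskNet:
    """Hypothetical stand-in for the secret small-scale masking network."""
    def __init__(self, dim, hidden=16):
        # Secret weights live only on the mobile device.
        self.w1 = rng.standard_normal((dim, hidden)) / np.sqrt(dim)
        self.w2 = rng.standard_normal((hidden, dim)) / np.sqrt(hidden)

    def mask(self, x):
        # Nonlinear transform: the server sees only this masked output.
        h = np.tanh(x @ self.w1)
        return h @ self.w2

def transmit(sample, masknet):
    """Mask on-device, then send the masked sample (dimension preserved)."""
    return masknet.mask(sample)

net = MaskNet(dim=8)
raw = rng.standard_normal(8)
sent = transmit(raw, net)   # what actually leaves the device
```

Because the masking weights stay on the device, the server-side model must be trained to consume masked inputs; the sketch only shows the device side of that split.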
1 code implementation • 24 Aug 2022 • Sijie Ji, Mo Li
In this paper, we propose a jigsaw-puzzle-aided training strategy (JPTS) that enhances deep-learning-based massive MIMO CSI feedback approaches by maximizing the mutual information between the original CSI and the compressed CSI.
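The jigsaw component can be sketched as follows: a CSI matrix is split into patches and the patches are permuted, giving an auxiliary "solve the puzzle" signal during training. Patch size and permutation handling here are assumptions for illustration; the paper's JPTS details may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_patches(csi, p):
    """Split an (H, W) matrix into (H//p * W//p) patches of shape (p, p)."""
    H, W = csi.shape
    return (csi.reshape(H // p, p, W // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p, p))

def jigsaw(csi, p, rng):
    """Return a patch-shuffled CSI matrix and the permutation (the label)."""
    patches = to_patches(csi, p)
    perm = rng.permutation(len(patches))
    shuffled = patches[perm]
    H, W = csi.shape
    # Reassemble the shuffled patches back into an (H, W) matrix.
    out = (shuffled.reshape(H // p, W // p, p, p)
                   .transpose(0, 2, 1, 3)
                   .reshape(H, W))
    return out, perm

csi = rng.standard_normal((8, 8))
puzzle, perm = jigsaw(csi, p=4, rng=rng)  # network would predict `perm`
```

In a full pipeline, the feedback encoder would be trained jointly on CSI reconstruction and on recovering `perm`, which encourages representations that preserve information shared with the original CSI.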
1 code implementation • 15 Feb 2021 • Sijie Ji, Mo Li
Numerous deep-learning approaches for massive MIMO CSI feedback have demonstrated their efficiency and potential.
no code implementations • 11 Jun 2020 • A. Gilad Kusne, Heshan Yu, Changming Wu, Huairuo Zhang, Jason Hattrick-Simpers, Brian DeCost, Suchismita Sarker, Corey Oses, Cormac Toher, Stefano Curtarolo, Albert V. Davydov, Ritesh Agarwal, Leonid A. Bendersky, Mo Li, Apurva Mehta, Ichiro Takeuchi
Active learning, the field of machine learning (ML) dedicated to optimal experiment design, has played a part in science as far back as the 18th century, when Laplace used it to guide his discovery of celestial mechanics [1].