1 code implementation • 1 Feb 2024 • Xuchen Pan, Yanxi Chen, Yaliang Li, Bolin Ding, Jingren Zhou
This work introduces EE-Tuning, a lightweight and economical solution to training/tuning early-exit large language models (LLMs).
1 code implementation • 8 Dec 2023 • Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou
We present EE-LLM, a framework for large-scale training and inference of early-exit large language models (LLMs).
no code implementations • 13 Apr 2023 • Yanxi Chen
This technical report studies the problem of ranking from pairwise comparisons in the classical Bradley-Terry-Luce (BTL) model, with a focus on score estimation.
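In the BTL model, item $i$ beats item $j$ with probability $w_i/(w_i+w_j)$ for latent scores $w$. As a minimal sketch of the score-estimation problem (using the classical minorization-maximization iteration, not necessarily the estimator studied in the report; the toy win counts are hypothetical):

```python
# Classical MM iteration for BTL score estimation from pairwise comparisons.
# wins[i][j] = number of times item i beat item j (hypothetical toy data).

def btl_mm(wins, iters=200):
    n = len(wins)
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])  # total wins of item i
            denom = sum(
                (wins[i][j] + wins[j][i]) / (scores[i] + scores[j])
                for j in range(n) if j != i
            )
            new.append(total_wins / denom if denom > 0 else scores[i])
        s = sum(new)
        scores = [x / s for x in new]  # normalize scores to sum to 1
    return scores

# Toy example: item 0 beats item 1 in 8 of 10 comparisons.
wins = [[0, 8], [2, 0]]
scores = btl_mm(wins)  # item 0 receives the higher score
```

The fixed point of this iteration is the maximum-likelihood estimate of the BTL scores, up to the normalization chosen above.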
no code implementations • 31 Mar 2023 • Jianfeng Wu, Yi Su, Yanxi Chen, Wenhui Zhu, Eric M. Reiman, Richard J. Caselli, Kewei Chen, Paul M. Thompson, Junwen Wang, Yalin Wang
Objective: To build a surface-based model to 1) detect differences between APOE subgroups in patterns of tau deposition and hippocampal atrophy, and 2) use the extracted surface-based features to predict cognitive decline.
no code implementations • 30 Jan 2023 • Gen Li, Yanxi Chen, Yuejie Chi, H. Vincent Poor, Yuxin Chen
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
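For context, a standard baseline for this subroutine is entropic regularization solved by Sinkhorn iterations; the sketch below is that baseline on a toy problem, not the algorithm proposed in the paper:

```python
import math

# Entropic-regularized optimal transport via Sinkhorn iterations.
# mu, nu: marginal distributions; cost[i][j]: ground cost matrix.
def sinkhorn(mu, nu, cost, eps=0.1, iters=500):
    n, m = len(mu), len(nu)
    # Gibbs kernel from the regularized cost
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # alternately match the row and column marginals
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # recovered transport plan and its transport cost
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    total = sum(P[i][j] * cost[i][j] for i in range(n) for j in range(m))
    return P, total

mu, nu = [0.5, 0.5], [0.5, 0.5]
cost = [[0.0, 1.0], [1.0, 0.0]]
P, dist = sinkhorn(mu, nu, cost)  # plan concentrates on the zero-cost diagonal
```

Exact computation scales poorly with support size, which is what motivates faster approximate subroutines of this kind.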
no code implementations • 26 Jan 2022 • Yanxi Chen, H. Vincent Poor
We study the problem of learning a mixture of multiple linear dynamical systems (LDSs) from unlabeled short sample trajectories, each generated by one of the LDS models.
no code implementations • 23 Sep 2020 • Yanxi Chen, Cong Ma, H. Vincent Poor, Yuxin Chen
We study the problem of learning mixtures of low-rank models, i.e., reconstructing multiple low-rank matrices from unlabeled linear measurements of each.
no code implementations • 16 Aug 2017 • Yanxi Chen, Gen Li, Yuantao Gu
In this letter, we propose a novel Active OMP-SSC, which improves the clustering accuracy of OMP-SSC by adaptively updating data points and randomly dropping data points in the OMP process, while still enjoying the low computational complexity of greedy pursuit algorithms.