Search Results for author: James Liu

Found 1 paper, 1 paper with code

BitDelta: Your Fine-Tune May Only Be Worth One Bit

1 code implementation • 15 Feb 2024 • James Liu, Guangxuan Xiao, Kai Li, Jason D. Lee, Song Han, Tri Dao, Tianle Cai

Large Language Models (LLMs) are typically trained in two phases: pre-training on large internet-scale datasets, and fine-tuning for downstream tasks.
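As the title suggests, BitDelta compresses the difference between fine-tuned and base weights down to roughly one bit per parameter. The sketch below is a minimal illustration of that idea, not the authors' exact implementation (which also calibrates the scale factors further): the sign of each weight delta is kept as one bit, and a single per-tensor scale (here the mean absolute delta, an assumed choice) preserves the overall magnitude.

```python
import torch

def bitdelta_compress(base_weight: torch.Tensor, finetuned_weight: torch.Tensor):
    """Compress the fine-tuning delta to 1 bit per parameter plus one scale.

    The sign tensor keeps the direction of each weight change; a single
    per-tensor scale keeps the magnitude (mean absolute delta, assumed here).
    """
    delta = finetuned_weight - base_weight
    sign = torch.sign(delta)        # 1 bit per parameter
    scale = delta.abs().mean()      # per-tensor scale factor
    return sign, scale

def bitdelta_decompress(base_weight: torch.Tensor, sign: torch.Tensor, scale: torch.Tensor):
    """Reconstruct an approximate fine-tuned weight from the base weight."""
    return base_weight + scale * sign

# Toy usage: a random "base" matrix plus a small simulated fine-tuning update.
base = torch.randn(1024, 1024)
finetuned = base + 0.01 * torch.randn(1024, 1024)
sign, scale = bitdelta_compress(base, finetuned)
approx = bitdelta_decompress(base, sign, scale)
rel_err = (approx - finetuned).norm() / finetuned.norm()
print(f"scale = {scale:.5f}, relative reconstruction error = {rel_err:.4f}")
```

Because the base weights are shared, many fine-tunes can then be served from one full-precision base model plus a 1-bit delta and a scale per tensor.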
