Search Results for author: Samir Arora

Found 1 paper, 0 papers with code

SPAFIT: Stratified Progressive Adaptation Fine-tuning for Pre-trained Large Language Models

no code implementations · 30 Apr 2024 · Samir Arora, Liangliang Wang

Full fine-tuning is a popular approach to adapt Transformer-based pre-trained large language models to a specific downstream task.
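For context on the baseline the abstract refers to, here is a minimal sketch of what full fine-tuning means in practice: every pre-trained parameter is handed to the optimizer and updated for the downstream task. It assumes the Hugging Face Transformers library, PyTorch, and an arbitrary checkpoint ("bert-base-uncased"); none of these choices come from the paper itself.

```python
# Sketch of full fine-tuning (assumptions: Hugging Face Transformers + PyTorch,
# "bert-base-uncased" chosen only for illustration, binary classification task).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # illustrative checkpoint, not from the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Full fine-tuning: the optimizer receives *all* model parameters, so the
# entire Transformer is adapted to the downstream task (in contrast to
# parameter-efficient methods, which freeze most of them).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step on a toy batch.
batch = tokenizer(["great movie", "terrible movie"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Parameter-efficient approaches such as the paper's SPAFIT instead restrict which parameters are trainable; the sketch above only illustrates the full fine-tuning baseline mentioned in the abstract.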
