Search Results for author: Sean Lie

Found 3 papers, 1 paper with code

MediSwift: Efficient Sparse Pre-trained Biomedical Language Models

no code implementations • 1 Mar 2024 • Vithursan Thangarasa, Mahmoud Salem, Shreyas Saxena, Kevin Leong, Joel Hestness, Sean Lie

Large language models (LLMs) are typically trained on general source data for various domains, but a recent surge in domain-specific LLMs has shown their potential to outperform general-purpose models in domain-specific tasks (e.g., biomedicine).

Question Answering

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency

2 code implementations • 21 Mar 2023 • Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie

Recent research has focused on weight sparsity in neural network training to reduce FLOPs, aiming for improved efficiency (test accuracy w.r.t. training FLOPs).
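The core idea behind an Iso-FLOP transformation is to trade density for width at constant compute. The following is a minimal, illustrative sketch (not the paper's released implementation) of a "sparse-wide" style transformation, assuming a single linear layer whose FLOPs scale with the product of its input and output widths; the names sparse_wide_width and SparseLinear are hypothetical.

import math
import torch
import torch.nn as nn

def sparse_wide_width(dense_width: int, sparsity: float) -> int:
    """Widen a layer so a sparse version matches the dense layer's FLOPs.

    Scaling both widths of a linear layer by k while keeping a (1 - sparsity)
    fraction of weights gives FLOPs proportional to k^2 * (1 - sparsity),
    so k = 1 / sqrt(1 - sparsity) keeps FLOPs roughly constant.
    """
    k = 1.0 / math.sqrt(1.0 - sparsity)
    return int(round(dense_width * k))

class SparseLinear(nn.Module):
    """Linear layer with a fixed random unstructured sparsity mask (illustrative)."""

    def __init__(self, in_features: int, out_features: int, sparsity: float):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Mask is a buffer, so it is saved with the model but never trained.
        mask = (torch.rand(out_features, in_features) >= sparsity).float()
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight * self.mask, self.linear.bias)

# Example: replace a dense 1024 -> 1024 layer with an Iso-FLOP sparse-wide one.
sparsity = 0.75
wide = sparse_wide_width(1024, sparsity)   # 2048 at 75% sparsity
layer = SparseLinear(wide, wide, sparsity)

Under this assumption, the sparse-wide layer has roughly the same training FLOPs as the original dense layer but a larger representational width, which is the kind of trade-off the paper studies.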

SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models

no code implementations • 18 Mar 2023 • Vithursan Thangarasa, Abhay Gupta, William Marshall, Tianda Li, Kevin Leong, Dennis Decoste, Sean Lie, Shreyas Saxena

In this work, we show the benefits of using unstructured weight sparsity to train only a subset of weights during pre-training (Sparse Pre-training) and then recover the representational capacity by allowing the zeroed weights to learn (Dense Fine-tuning).

Text Generation • Text Summarization
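To make the sparse pre-training / dense fine-tuning split concrete, here is a minimal sketch assuming a fixed random unstructured mask during pre-training that is simply dropped for fine-tuning; the MaskedLinear class and its dense flag are illustrative, not the authors' code.

import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer that runs sparse (masked) for pre-training, dense for fine-tuning."""

    def __init__(self, in_features: int, out_features: int, sparsity: float):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Fixed random unstructured mask stored as a buffer (not a trainable parameter).
        mask = (torch.rand(out_features, in_features) >= sparsity).float()
        self.register_buffer("mask", mask)
        self.dense = False  # False = sparse pre-training, True = dense fine-tuning

    def forward(self, x):
        # While masked, the zeroed weights receive no gradient through this path.
        weight = self.linear.weight if self.dense else self.linear.weight * self.mask
        return nn.functional.linear(x, weight, self.linear.bias)

# Sparse pre-training: only the unmasked subset of weights is effectively trained.
layer = MaskedLinear(512, 512, sparsity=0.8)
# ... pre-train with layer.dense == False ...

# Dense fine-tuning: remove the mask so the previously zeroed weights can learn,
# recovering the full representational capacity for the downstream task.
layer.dense = True
# ... fine-tune on the downstream task ...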
