Search Results for author: Abdul Wasay

Found 5 papers, 2 papers with code

Domain-Specific Code Language Models: Unraveling the Potential for HPC Codes and Tasks

2 code implementations • 20 Dec 2023 • Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Mihai Capota, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren

Specifically, we start with HPC as a domain and build an HPC-specific LM, named MonoCoder, that is orders of magnitude smaller than existing LMs but delivers similar, if not better, performance on non-HPC and HPC tasks.

Code Generation
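As a rough illustration of how a compact, domain-specific code LM like this could be queried, here is a minimal sketch using the Hugging Face transformers API; the checkpoint ID and prompt are placeholder assumptions, not details taken from the paper.

```python
# Minimal sketch: prompting a small causal code LM on an HPC task via
# Hugging Face transformers. The model ID below is a placeholder
# assumption -- substitute the checkpoint published with the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/MonoCoder"  # hypothetical ID, not confirmed by this listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "// Parallelize this loop with OpenMP\n"
    "for (int i = 0; i < n; i++) {\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```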

Scope is all you need: Transforming LLMs for HPC Code

2 code implementations • 18 Aug 2023 • Tal Kadosh, Niranjan Hasabnis, Vy A. Vo, Nadav Schneider, Neva Krien, Abdul Wasay, Nesreen Ahmed, Ted Willke, Guy Tamir, Yuval Pinter, Timothy Mattson, Gal Oren

With easier access to powerful compute resources, there is a growing trend in AI for software development toward building larger and larger language models (LLMs) to address a variety of programming tasks.

Code Completion

Silhouette: Toward Performance-Conscious and Transferable CPU Embeddings

no code implementations • 15 Dec 2022 • Tarikul Islam Papon, Abdul Wasay

Learned embeddings are widely used to obtain concise data representations and to enable transfer learning between different data sets and tasks.

Transfer Learning
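The snippet below sketches that general idea in PyTorch: an embedding learned alongside one prediction task is frozen and reused as input features for a new task. Dimensions, heads, and the CPU-id inputs are illustrative assumptions, not Silhouette's actual design.

```python
# Generic sketch of transferable learned embeddings (PyTorch).
# All sizes and task heads here are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CPUS, DIM = 100, 16

# Stage 1: learn embeddings jointly with a source task head
# (e.g., predicting some performance metric).
cpu_embedding = nn.Embedding(NUM_CPUS, DIM)
source_head = nn.Linear(DIM, 1)
# ... train cpu_embedding + source_head on the source task here ...

# Stage 2: freeze the learned embedding and train only a new head,
# transferring the concise representation to a different task.
cpu_embedding.weight.requires_grad_(False)
target_head = nn.Linear(DIM, 1)

cpu_ids = torch.tensor([3, 7])       # two CPUs, referenced by integer id
features = cpu_embedding(cpu_ids)    # concise, reusable representation
prediction = target_head(features)   # downstream task output
```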

More or Less: When and How to Build Neural Network Ensembles

no code implementations • ICLR 2021 • Abdul Wasay, Stratos Idreos

We identify a critical part of this design space that is not well understood: how to decide between expanding a single network model and increasing the number of networks used together in an ensemble.

Navigate
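To make that alternative concrete, here is a minimal sketch, under assumed toy MLP architectures, of the two options being weighed: one expanded network versus several smaller networks used as an ensemble at a comparable parameter budget.

```python
# Sketch of the design choice: expand one network, or train several
# smaller ones and combine them. Architectures are toy assumptions.
import torch.nn as nn

def mlp(width: int) -> nn.Module:
    return nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 10))

def n_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

single = mlp(256)                        # option A: one expanded network
ensemble = [mlp(64) for _ in range(4)]   # option B: ensemble of smaller networks

# Roughly comparable parameter budgets (~11k each in this toy setup).
print(n_params(single), sum(n_params(m) for m in ensemble))
```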

MotherNets: Rapid Deep Ensemble Learning

no code implementations • 12 Sep 2018 • Abdul Wasay, Brian Hentschel, Yuze Liao, Sanyuan Chen, Stratos Idreos

We propose MotherNets to enable higher accuracy and practical training cost for large and diverse neural network ensembles: a MotherNet captures the structural similarity across some or all members of a deep neural network ensemble, which allows us to share data movement and computation costs across these networks.

Clustering • Clustering Ensemble +2
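A minimal sketch of that high-level idea in PyTorch, under simplifying assumptions: a small shared network is trained once, and each ensemble member is then seeded from its trained layers before adding member-specific capacity. The paper's actual structure capture and function-preserving hatching are not reproduced here.

```python
# Simplified sketch of the MotherNets idea: pay the shared training
# cost once, then "hatch" diverse members from the trained weights.
import copy
import torch.nn as nn

# The MotherNet: a small network capturing structure shared by all members.
mother = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
# ... train `mother` once; its cost is amortized over the whole ensemble ...

# Hatch members: each reuses the trained first layer and adds its own
# capacity, then would be fine-tuned briefly to diversify the ensemble.
ensemble = [
    nn.Sequential(
        copy.deepcopy(mother[0]), nn.ReLU(),   # shared, pre-trained layer
        nn.Linear(64, 64), nn.ReLU(),          # member-specific extra capacity
        nn.Linear(64, 10),                     # member-specific output head
    )
    for _ in range(4)
]
```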
