Search Results for author: Pei-Hung Lin

Found 9 papers, 2 papers with code

HPC-GPT: Integrating Large Language Model for High-Performance Computing

no code implementations • 3 Oct 2023 • Xianzhong Ding, Le Chen, Murali Emani, Chunhua Liao, Pei-Hung Lin, Tristan Vanderbruggen, Zhen Xie, Alberto E. Cerpa, Wan Du

Large Language Models (LLMs), including the LLaMA model, have exhibited their efficacy across various general-domain natural language processing (NLP) tasks.

Language Modelling Large Language Model +1

Towards Zero Memory Footprint Spiking Neural Network Training

no code implementations • 16 Aug 2023 • Bin Lei, Sheng Lin, Pei-Hung Lin, Chunhua Liao, Caiwen Ding

Our design is able to achieve a $\mathbf{58.65\times}$ reduction in memory usage compared to the current SNN node.

Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought

no code implementations • 16 Aug 2023 • Bin Lei, Pei-Hung Lin, Chunhua Liao, Caiwen Ding

Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries.

GPT-4 Logical Reasoning

Data Race Detection Using Large Language Models

no code implementations • 15 Aug 2023 • Le Chen, Xianzhong Ding, Murali Emani, Tristan Vanderbruggen, Pei-Hung Lin, Chunhua Liao

Large language models (LLMs) are demonstrating significant promise as an alternate strategy to facilitate analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation.

Creating a Dataset for High-Performance Computing Code Translation using LLMs: A Bridge Between OpenMP Fortran and C++

1 code implementation • 15 Jul 2023 • Bin Lei, Caiwen Ding, Le Chen, Pei-Hung Lin, Chunhua Liao

In this study, we present a novel dataset for training machine learning models translating between OpenMP Fortran and C++ code.

C++ code Code Translation +2

LM4HPC: Towards Effective Language Model Application in High-Performance Computing

no code implementations • 26 Jun 2023 • Le Chen, Pei-Hung Lin, Tristan Vanderbruggen, Chunhua Liao, Murali Emani, Bronis de Supinski

In recent years, language models (LMs), such as GPT-4, have been widely used in multiple domains, including natural language processing and visualization.

GPT-4 Language Modelling

Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study

no code implementations • 3 Nov 2022 • Pei-Hung Lin, Chunhua Liao, Winson Chen, Tristan Vanderbruggen, Murali Emani, Hailu Xu

The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making it both human- and machine-actionable.

Fairness
