Search Results for author: Tristan Vanderbruggen

Found 6 papers, 1 paper with code

HPC-GPT: Integrating Large Language Model for High-Performance Computing

no code implementations • 3 Oct 2023 • Xianzhong Ding, Le Chen, Murali Emani, Chunhua Liao, Pei-Hung Lin, Tristan Vanderbruggen, Zhen Xie, Alberto E. Cerpa, Wan Du

Large Language Models (LLMs), including the LLaMA model, have exhibited their efficacy across various general-domain natural language processing (NLP) tasks.

Language Modelling • Large Language Model

Data Race Detection Using Large Language Models

no code implementations • 15 Aug 2023 • Le Chen, Xianzhong Ding, Murali Emani, Tristan Vanderbruggen, Pei-Hung Lin, Chunhua Liao

Large language models (LLMs) are demonstrating significant promise as an alternate strategy to facilitate analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation.

LM4HPC: Towards Effective Language Model Application in High-Performance Computing

no code implementations • 26 Jun 2023 • Le Chen, Pei-Hung Lin, Tristan Vanderbruggen, Chunhua Liao, Murali Emani, Bronis de Supinski

In recent years, language models (LMs), such as GPT-4, have been widely used in multiple domains, including natural language processing, visualization, and so on.

Language Modelling

Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study

no code implementations • 3 Nov 2022 • Pei-Hung Lin, Chunhua Liao, Winson Chen, Tristan Vanderbruggen, Murali Emani, Hailu Xu

The FAIR Guiding Principles aim to improve the findability, accessibility, interoperability, and reusability of digital content by making it both human- and machine-actionable.

Fairness
