Search Results for author: Joseph McDonald

Found 10 papers, 1 paper with code

Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale

no code implementations25 Feb 2024 Dan Zhao, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, Vijay Gadepally

In this paper, we study the aggregate effect of power-capping GPUs on GPU temperature and power draw at a research supercomputing center.
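The paper reports measurements rather than code, but the mechanism itself is scriptable. Below is a minimal sketch of how a GPU power cap might be applied and monitored through NVIDIA's NVML Python bindings (pynvml); the 250 W cap is an illustrative value, not one taken from the paper, and setting limits typically requires administrative privileges.

```python
# Minimal sketch: cap GPU power and sample temperature/power draw via NVML.
# The 250 W cap is an illustrative assumption; setting limits usually needs root.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Query the hardware-supported range before capping (values are milliwatts).
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
cap_mw = max(min_mw, min(250_000, max_mw))  # clamp the illustrative 250 W cap
pynvml.nvmlDeviceSetPowerManagementLimit(handle, cap_mw)

# Sample temperature and power draw, the quantities the paper studies in aggregate.
for _ in range(5):
    temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    print(f"temp={temp_c} C  power={power_w:.1f} W")
    time.sleep(1)

pynvml.nvmlShutdown()
```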

A Benchmark Dataset for Tornado Detection and Prediction using Full-Resolution Polarimetric Weather Radar Data

no code implementations26 Jan 2024 Mark S. Veillette, James M. Kurdzo, Phillip M. Stepanian, John Y. N. Cho, Siddharth Samsi, Joseph McDonald

A number of ML baselines for tornado detection are developed and compared, including a novel deep learning (DL) architecture that processes raw radar imagery without the manual feature extraction required by existing ML algorithms.

Feature Engineering
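The paper's DL architecture is not reproduced here, but the sketch below illustrates the general idea of classifying raw multi-channel (polarimetric) radar imagery with a small convolutional network in PyTorch. The channel count, layer sizes, and input resolution are all illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch: a small CNN over raw polarimetric radar channels.
# The channel count (6) and all layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RadarTornadoNet(nn.Module):
    def __init__(self, in_channels: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a fixed-size descriptor
        )
        self.head = nn.Linear(64, 1)  # tornado / no-tornado logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One batch of fake radar imagery: (batch, channels, height, width).
logits = RadarTornadoNet()(torch.randn(2, 6, 256, 256))
print(logits.shape)  # torch.Size([2, 1])
```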

A Green(er) World for A.I.

no code implementations27 Jan 2023 Dan Zhao, Nathan C. Frey, Joseph McDonald, Matthew Hubbell, David Bestor, Michael Jones, Andrew Prout, Vijay Gadepally, Siddharth Samsi

As the scale of A.I. research and its applications grows, we are sure to face an ever-mounting energy footprint to sustain these computational budgets, data storage needs, and more.

An Evaluation of Low Overhead Time Series Preprocessing Techniques for Downstream Machine Learning

no code implementations12 Sep 2022 Matthew L. Weiss, Joseph McDonald, David Bestor, Charles Yee, Daniel Edelman, Michael Jones, Andrew Prout, Andrew Bowne, Lindsey McEvoy, Vijay Gadepally, Siddharth Samsi

Our best performing models achieve a classification accuracy greater than 95%, outperforming previous approaches to multi-channel time series classification with the MIT SuperCloud Dataset by 5%.

Classification, Time Series, +2
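The snippet does not name the specific preprocessing techniques evaluated; as one representative low-overhead example, the sketch below standardizes each channel of a multi-channel time series with per-channel z-scores before handing it to a downstream classifier. The array shapes and the epsilon guard are illustrative assumptions.

```python
# Illustrative low-overhead preprocessing: per-channel z-score normalization
# of multi-channel time series shaped (n_samples, n_channels, n_timesteps).
import numpy as np

def zscore_per_channel(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    mean = x.mean(axis=-1, keepdims=True)  # per-sample, per-channel mean
    std = x.std(axis=-1, keepdims=True)    # per-sample, per-channel std
    return (x - mean) / (std + eps)        # eps guards flat channels

x = np.random.randn(4, 3, 100) * 5.0 + 2.0  # fake multi-channel series
x_norm = zscore_per_channel(x)
print(x_norm.mean(axis=-1).round(6))  # ~0 per channel after normalization
```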

Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models

no code implementations Findings (NAACL) 2022 Joseph McDonald, Baolin Li, Nathan Frey, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi

In particular, we focus on techniques to measure energy usage and different hardware and datacenter-oriented settings that can be tuned to reduce energy consumption for training and inference for language models.

Cloud Computing, Language Modelling
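One measurement technique the abstract alludes to, sampling GPU power draw and integrating it over a run, can be sketched with NVML as below. The one-second sampling interval and the simple rectangle-rule integration are simplifying assumptions, not the paper's methodology, and `train_one_epoch` is a hypothetical stand-in for a real training call.

```python
# Sketch: estimate the energy of a workload by polling GPU power draw via
# NVML in a background thread. Sampling interval is an illustrative choice.
import threading
import time
import pynvml

class GpuEnergyMeter:
    """Polls GPU power draw and accumulates a rough energy estimate."""

    def __init__(self, device_index: int = 0, interval_s: float = 1.0):
        pynvml.nvmlInit()
        self._handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        self._interval = interval_s
        self._joules = 0.0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._poll, daemon=True)

    def _poll(self):
        while not self._stop.is_set():
            watts = pynvml.nvmlDeviceGetPowerUsage(self._handle) / 1000.0
            self._joules += watts * self._interval  # rectangle-rule integration
            time.sleep(self._interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
        print(f"estimated energy: {self._joules / 3.6e6:.6f} kWh")

# with GpuEnergyMeter():
#     train_one_epoch()  # hypothetical training call
```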

Benchmarking Resource Usage for Efficient Distributed Deep Learning

no code implementations28 Jan 2022 Nathan C. Frey, Baolin Li, Joseph McDonald, Dan Zhao, Michael Jones, David Bestor, Devesh Tiwari, Vijay Gadepally, Siddharth Samsi

Deep learning (DL) workflows demand an ever-increasing budget of compute and energy to achieve outsized gains.

Benchmarking
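As a single-GPU stand-in for the kind of resource accounting the paper performs at distributed scale, the sketch below times one training step and reports peak device memory with PyTorch's built-in counters; the toy model, batch size, and objective are assumptions for illustration.

```python
# Sketch: benchmark wall time and peak GPU memory for one training step.
# The model, batch size, and loss are toy stand-ins for a real DL workload.
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(64, 1024, device=device)

if device == "cuda":
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()  # drain pending kernels before timing
t0 = time.perf_counter()

loss = model(x).square().mean()  # toy objective
loss.backward()
opt.step()

if device == "cuda":
    torch.cuda.synchronize()
    peak_mb = torch.cuda.max_memory_allocated() / 2**20
    print(f"peak GPU memory: {peak_mb:.1f} MiB")
print(f"step time: {time.perf_counter() - t0:.4f} s")
```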

Scalable Geometric Deep Learning on Molecular Graphs

1 code implementation NeurIPS Workshop AI4Science 2021 Nathan C. Frey, Siddharth Samsi, Joseph McDonald, Lin Li, Connor W. Coley, Vijay Gadepally

Deep learning in molecular and materials sciences is limited by the lack of integration between applied science, artificial intelligence, and high-performance computing.
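The paper's released implementation is not reproduced here; instead, the sketch below shows the core geometric-DL primitive it builds on, one round of message passing over a molecular graph, in plain PyTorch. The feature width, edge list, and sum aggregation are illustrative choices, not the paper's architecture.

```python
# Minimal sketch: one message-passing step over a molecular graph.
# Node features stand in for atom descriptors; edges for bonds (both directions).
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)        # transform neighbor features
        self.update = nn.Linear(2 * dim, dim) # combine self and aggregate

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index  # edge_index: (2, num_edges)
        # Sum incoming messages at each destination node.
        agg = torch.zeros_like(x).index_add_(0, dst, self.msg(x[src]))
        return torch.relu(self.update(torch.cat([x, agg], dim=-1)))

# Toy 3-atom molecule: bonds 0-1 and 1-2, stored in both directions.
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
print(MessagePassingLayer(16)(x, edge_index).shape)  # torch.Size([3, 16])
```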
