Search Results for author: Valerie Taylor

Found 6 papers, 3 papers with code

Autotuning Apache TVM-based Scientific Applications Using Bayesian Optimization

1 code implementation • 13 Sep 2023 • Xingfu Wu, Praveen Paramasivam, Valerie Taylor

Apache TVM (Tensor Virtual Machine), an open source machine learning compiler framework designed to optimize computations across various hardware platforms, provides an opportunity to improve the performance of dense matrix factorizations such as LU (Lower Upper) decomposition and Cholesky decomposition on GPUs and AI (Artificial Intelligence) accelerators.

Bayesian Optimization
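
The tuning loop this entry describes can be approximated with off-the-shelf Bayesian optimization. Below is a minimal sketch, assuming scikit-optimize as the optimizer; `build_and_time_cholesky` is a hypothetical stand-in for compiling a TVM schedule and timing it on the target device, not the paper's actual harness, and its synthetic cost function exists only so the sketch runs end to end.

```python
# Hedged sketch: Bayesian optimization over schedule parameters for a
# TVM-compiled kernel, using scikit-optimize's gp_minimize.
from skopt import gp_minimize
from skopt.space import Integer, Categorical

def build_and_time_cholesky(params):
    # Hypothetical stand-in for "build the TVM schedule with these tile
    # sizes, run on the GPU, return mean runtime (s)". Synthetic cost
    # model here so the sketch executes without TVM installed.
    tile_x, tile_y, unroll = params
    return ((tile_x - 32) ** 2 + (tile_y - 16) ** 2) / 1024 + 0.1 / (1 + unroll)

space = [
    Integer(4, 64, name="tile_x"),             # thread-block tile width
    Integer(4, 64, name="tile_y"),             # thread-block tile height
    Categorical([0, 1, 2, 4], name="unroll"),  # unroll factor
]

# Gaussian-process surrogate; each of the 30 calls "compiles and times"
# one candidate schedule, and the model proposes the next one.
result = gp_minimize(build_and_time_cholesky, space, n_calls=30, random_state=0)
print("best params:", result.x, "best cost:", result.fun)
```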

ytopt: Autotuning Scientific Applications for Energy Efficiency at Large Scales

1 code implementation • 28 Mar 2023 • Xingfu Wu, Prasanna Balaprakash, Michael Kruse, Jaehoon Koo, Brice Videau, Paul Hovland, Valerie Taylor, Brad Geltz, Siddhartha Jana, Mary Hall

As we enter the exascale computing era, efficiently utilizing power and optimizing the performance of scientific applications under power and energy constraints have become critical and challenging.

Bayesian Optimization

Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization (extended version)

1 code implementation • 27 Apr 2021 • Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Paul Hovland, Valerie Taylor, Mary Hall

In this paper, we develop the ytopt autotuning framework, which leverages Bayesian optimization to explore the parameter search space, and we compare four different supervised learning methods within Bayesian optimization and evaluate their effectiveness.

Bayesian Optimization
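
Comparing surrogate models inside the Bayesian optimization loop can be sketched with scikit-optimize's ask/tell interface. The four estimator names below (GP, RF, ET, GBRT) are skopt's built-in surrogates standing in for the paper's four supervised learning methods, and `evaluate_pragma_config` is a hypothetical objective with a synthetic cost, not the paper's benchmark harness.

```python
# Hedged sketch: compare surrogate models inside a Bayesian-optimization
# autotuning loop via scikit-optimize's ask/tell interface.
from skopt import Optimizer
from skopt.space import Integer

def evaluate_pragma_config(x):
    # Hypothetical objective: apply loop-optimization pragmas with these
    # parameters, compile, run, return runtime. Synthetic quadratic here
    # so the sketch executes.
    tile, unroll = x
    return (tile - 48) ** 2 + 4 * (unroll - 2) ** 2

space = [Integer(8, 128, name="tile_size"),
         Integer(1, 8, name="unroll_factor")]

# skopt's built-in surrogates: Gaussian process, random forest,
# extra trees, and gradient-boosted trees.
for surrogate in ["GP", "RF", "ET", "GBRT"]:
    opt = Optimizer(space, base_estimator=surrogate, acq_func="EI",
                    random_state=0)
    for _ in range(20):                  # 20 evaluations per surrogate
        x = opt.ask()
        opt.tell(x, evaluate_pragma_config(x))
    print(f"{surrogate}: best objective = {min(opt.yi)}")
```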

Performance and Power Modeling and Prediction Using MuMMI and Ten Machine Learning Methods

no code implementations • 12 Nov 2020 • Xingfu Wu, Valerie Taylor, Zhiling Lan

In this paper, we use the modeling and prediction tool MuMMI (Multiple Metrics Modeling Infrastructure) and ten machine learning methods to model and predict performance and power, and we compare their prediction error rates.

Machine Learning
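
The comparison methodology can be sketched with scikit-learn regressors and synthetic data standing in for MuMMI's instrumentation; the features, target, and three models below are illustrative assumptions, not the paper's ten methods or datasets.

```python
# Hedged sketch: compare prediction error rates of several regression
# methods on performance/power-style data. All data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))                       # e.g., hardware-counter features
y = 20 + 50 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 1, 200)  # e.g., power (W)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "svr": SVR(),
    "random_forest": RandomForestRegressor(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # Mean absolute percentage error, one common "prediction error rate".
    mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
    print(f"{name}: {mape:.1f}% error")
```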

Utilizing Ensemble Learning for Performance and Power Modeling and Improvement of Parallel Cancer Deep Learning CANDLE Benchmarks

no code implementations • 12 Nov 2020 • Xingfu Wu, Valerie Taylor

In this paper, we utilize ensemble learning to combine linear, nonlinear, and tree-/rule-based ML methods to cope with the bias-variance tradeoff and produce more accurate models.

Ensemble Learning
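
The combination idea can be sketched with scikit-learn's stacking ensemble, which blends linear, nonlinear (kernel), and tree-based learners; the specific base learners and synthetic data are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: stack linear, nonlinear, and tree-based regressors so
# the ensemble trades off bias and variance better than any one family.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 5))                       # e.g., benchmark features
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=300)

ensemble = StackingRegressor(
    estimators=[
        ("linear", LinearRegression()),        # low variance, high bias
        ("svr", SVR()),                        # nonlinear kernel method
        ("forest", RandomForestRegressor(random_state=0)),  # tree-based
    ],
    final_estimator=Ridge(),  # meta-learner blends the base predictions
)
score = cross_val_score(ensemble, X, y, cv=5, scoring="r2").mean()
print(f"stacked ensemble mean R^2: {score:.3f}")
```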

Autotuning PolyBench Benchmarks with LLVM Clang/Polly Loop Optimization Pragmas Using Bayesian Optimization

no code implementations • 15 Oct 2020 • Xingfu Wu, Michael Kruse, Prasanna Balaprakash, Hal Finkel, Paul Hovland, Valerie Taylor, Mary Hall

Autotuning is an approach that explores a search space of possible implementations/configurations of a kernel or an application by selecting and evaluating a subset of implementations/configurations on a target platform, and/or uses models to identify a high-performance implementation/configuration.

Bayesian Optimization
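
The search procedure this abstract describes reduces to sampling configurations, measuring each on the target platform, and keeping the best. A minimal random-search sketch follows, where `compile_and_run` is a hypothetical stand-in for applying loop-optimization pragmas, compiling, and timing a kernel, with a synthetic cost so the sketch runs.

```python
# Hedged sketch: the generic autotuning loop described above, sampling a
# subset of the configuration space and keeping the fastest candidate.
import random

SEARCH_SPACE = {
    "tile_size": [8, 16, 32, 64, 128],
    "unroll_factor": [1, 2, 4, 8],
    "interchange": [True, False],
}

def compile_and_run(config):
    # Hypothetical: emit the pragmas for `config`, build the kernel, run
    # it, and return runtime in seconds. Synthetic cost used here.
    return (abs(config["tile_size"] - 32) / 32
            + abs(config["unroll_factor"] - 4) / 4
            + (0.0 if config["interchange"] else 0.2))

random.seed(0)
best_config, best_time = None, float("inf")
for _ in range(25):  # evaluate a 25-sample subset of the space
    config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    runtime = compile_and_run(config)
    if runtime < best_time:
        best_config, best_time = config, runtime

print("best configuration:", best_config, "runtime:", best_time)
```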
