Search Results for author: Aydin Buluc

Found 8 papers, 4 papers with code

Distributed Matrix-Based Sampling for Graph Neural Network Training

no code implementations • 6 Nov 2023 • Alok Tripathy, Katherine Yelick, Aydin Buluc

We provide experimental results on the largest Open Graph Benchmark (OGB) datasets on $128$ GPUs, and show that our pipeline is $2.5\times$ faster than Quiver (a distributed extension to PyTorch-Geometric) on a $3$-layer GraphSAGE network.
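Minibatch GraphSAGE training relies on layer-wise neighbor sampling with a fixed fanout per layer. The sketch below shows that core sampling loop over a CSR adjacency; it is a hypothetical serial illustration, not the paper's distributed matrix-based implementation, and the function name is invented for this example.

```python
import numpy as np

def sample_neighborhood(adj_indptr, adj_indices, seeds, fanouts, rng):
    """Layer-wise neighbor sampling (GraphSAGE-style).

    adj_indptr/adj_indices: CSR adjacency of the graph.
    seeds: node ids in the current minibatch.
    fanouts: number of neighbors to sample per node at each layer.
    Returns the sampled frontier for each layer.
    """
    frontiers = [np.asarray(seeds)]
    for k in fanouts:
        nxt = []
        for v in frontiers[-1]:
            nbrs = adj_indices[adj_indptr[v]:adj_indptr[v + 1]]
            if len(nbrs) > 0:
                nxt.append(rng.choice(nbrs, size=min(k, len(nbrs)), replace=False))
        frontiers.append(np.unique(np.concatenate(nxt)) if nxt
                         else np.array([], dtype=int))
    return frontiers
```

A $3$-layer network as benchmarked above would use three fanouts, e.g. `fanouts=[15, 10, 5]`; the paper's contribution is expressing this sampling as distributed sparse-matrix operations rather than per-node loops.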

PersGNN: Applying Topological Data Analysis and Geometric Deep Learning to Structure-Based Protein Function Prediction

no code implementations • 30 Oct 2020 • Nicolas Swenson, Aditi S. Krishnapriyan, Aydin Buluc, Dmitriy Morozov, Katherine Yelick

Understanding protein structure-function relationships is a key challenge in computational biology, with applications across the biotechnology and pharmaceutical industries.

Graph Representation Learning • Protein Function Prediction +1

Parallel String Graph Construction and Transitive Reduction for De Novo Genome Assembly

3 code implementations • 20 Oct 2020 • Giulia Guidi, Oguz Selvitopi, Marquita Ellis, Leonid Oliker, Katherine Yelick, Aydin Buluc

In this work, we introduce new distributed-memory parallel algorithms for overlap detection and layout simplification steps of de novo genome assembly, and implement them in the diBELLA 2D pipeline.

Distributed, Parallel, and Cluster Computing • Genomics
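Transitive reduction removes a string-graph edge when its two endpoints are already connected through an intermediate overlap. A minimal serial sketch of that idea (the paper's algorithms are distributed and sparse-matrix based, so this is only an illustration of the graph operation itself):

```python
def transitive_reduction(edges, nodes):
    """Keep only edges (u, v) where v is NOT reachable from u
    through some other path; such edges are redundant for layout."""
    succ = {u: set() for u in nodes}
    for u, v in edges:
        succ[u].add(v)

    def reachable(src, dst, skip_edge):
        # DFS from src to dst, ignoring the edge under test.
        stack, seen = [src], {src}
        while stack:
            x = stack.pop()
            for y in succ[x]:
                if (x, y) == skip_edge:
                    continue
                if y == dst:
                    return True
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]
```

For example, with overlaps `0→1`, `1→2`, and `0→2`, the edge `0→2` is transitive and is dropped, leaving the linear layout `0→1→2`.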

Reducing Communication in Graph Neural Network Training

2 code implementations • 7 May 2020 • Alok Tripathy, Katherine Yelick, Aydin Buluc

Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data.
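The sparse connectivity enters GNN training as a sparse-dense matrix product: aggregating neighbor features is a multiplication by the (sparse) adjacency matrix, and it is the partitioning of exactly this product across processes that determines communication cost. A toy single-layer sketch (the normalization and weights here are illustrative, not the paper's setup):

```python
import numpy as np
from scipy.sparse import csr_matrix

def gcn_layer(adj, H, W):
    """One graph-convolution layer: aggregate neighbor features
    with a sparse-dense product (adj @ H), transform (@ W), ReLU."""
    return np.maximum(adj @ H @ W, 0.0)

# Tiny 3-node path graph with self-loops, row-normalized.
rows = [0, 0, 1, 1, 1, 2, 2]
cols = [0, 1, 0, 1, 2, 1, 2]
vals = [0.5, 0.5, 1/3, 1/3, 1/3, 0.5, 0.5]
adj = csr_matrix((vals, (rows, cols)), shape=(3, 3))

H = np.eye(3)             # one-hot input features
W = np.ones((3, 2))       # toy weight matrix
out = gcn_layer(adj, H, W)  # shape (3, 2)
```

In a distributed run, `adj`, `H`, or both are partitioned across processes, and the choice of 1D/1.5D/2D/3D partitioning trades off memory against the communication the paper analyzes.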

GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU

1 code implementation • 4 Aug 2019 • Carl Yang, Aydin Buluc, John D. Owens

In this paper, we examine the performance challenges of a linear-algebra-based approach to building graph frameworks and describe new design principles for overcoming these bottlenecks.

Distributed, Parallel, and Cluster Computing • Mathematical Software
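The linear-algebra-based approach expresses graph traversals as matrix-vector products over a semiring: each BFS step is "multiply the frontier by the adjacency matrix, masked by unvisited vertices." A dense numpy sketch of that pattern (a real framework like GraphBLAST uses a sparse adjacency and GPU-side masked semiring operations):

```python
import numpy as np

def bfs_levels(adj, source):
    """BFS as repeated frontier expansions.

    adj: boolean matrix, adj[u, v] == True for edge u -> v.
    Returns the BFS level of each vertex (-1 if unreachable).
    """
    n = adj.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    level = 0
    while frontier.any():
        levels[frontier] = level
        # Expand: vertices adjacent to the frontier, masked by unvisited.
        frontier = adj[frontier].any(axis=0) & (levels == -1)
        level += 1
    return levels
```

The mask `levels == -1` is the key performance lever the bottleneck analysis in such frameworks revolves around: it keeps each step's work proportional to the frontier rather than the whole graph.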

Integrated Model, Batch and Domain Parallelism in Training Neural Networks

no code implementations • 12 Dec 2017 • Amir Gholami, Ariful Azad, Peter Jin, Kurt Keutzer, Aydin Buluc

We propose a new integrated method of exploiting model, batch and domain parallelism for the training of deep neural networks (DNNs) on large distributed-memory computers using minibatch stochastic gradient descent (SGD).
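Of the three parallelism modes, batch (data) parallelism is the simplest to sketch: each worker computes a gradient on its shard of the minibatch, the gradients are averaged (an allreduce in a real distributed run), and all workers apply the same update. A minimal simulation of one such SGD step, assuming precomputed per-worker gradients (function name invented for this example):

```python
import numpy as np

def sgd_step_batch_parallel(w, grads_per_worker, lr):
    """One minibatch-SGD step under batch parallelism.

    grads_per_worker: array of shape (workers, dim), each row the
    gradient on one worker's shard. Averaging simulates the
    allreduce; every worker then applies the identical update.
    """
    g = np.mean(grads_per_worker, axis=0)  # simulated allreduce
    return w - lr * g
```

Model parallelism instead splits the weight matrices across workers and domain parallelism splits the input (e.g. an image) itself; the paper's point is that integrating all three, rather than using any one alone, gives the best communication profile on large machines.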

Communication-Avoiding Optimization Methods for Distributed Massive-Scale Sparse Inverse Covariance Estimation

1 code implementation • 30 Oct 2017 • Penporn Koanantakool, Alnur Ali, Ariful Azad, Aydin Buluc, Dmitriy Morozov, Leonid Oliker, Katherine Yelick, Sang-Yun Oh

Across a variety of scientific disciplines, sparse inverse covariance estimation is a popular tool for capturing the underlying dependency relationships in multivariate data.

Clustering
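The inverse covariance (precision) matrix captures dependencies because a zero entry means two variables are conditionally independent given the rest. A small numpy demonstration of that fact on a three-variable chain (this only motivates the object being estimated; the paper's contribution is communication-avoiding optimization for sparse estimators such as the graphical lasso at massive scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Chain: x1 depends on x0, x2 depends on x1. So x0 and x2 are
# conditionally independent given x1, and the precision matrix
# should be (near-)zero at entry (0, 2).
n = 200_000
x0 = rng.standard_normal(n)
x1 = x0 + rng.standard_normal(n)
x2 = x1 + rng.standard_normal(n)
X = np.stack([x0, x1, x2], axis=1)

precision = np.linalg.inv(np.cov(X, rowvar=False))
# precision[0, 2] shrinks toward zero; sparse estimators like the
# graphical lasso enforce exact zeros, which is what makes the
# problem both statistically useful and computationally hard at scale.
```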
