no code implementations • 21 Jun 2022 • Chunxing Yin, Da Zheng, Israt Nisa, Christos Faloutsos, George Karypis, Richard Vuduc
This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition.
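A minimal numpy sketch of the idea (not the paper's implementation): the dense N x D embedding table is replaced by a few TT cores, and each row is reconstructed on demand by contracting core slices. The factorizations of N and D and the TT rank below are illustrative assumptions.

```python
# Sketch: a GNN node-embedding table stored in tensor-train (TT) format.
# Shapes and rank are illustrative assumptions, not values from the paper.
import numpy as np

n_factors, d_factors = (8, 8, 8), (4, 4, 4)     # N = 512 = 8*8*8, D = 64 = 4*4*4
num_nodes, emb_dim = int(np.prod(n_factors)), int(np.prod(d_factors))
ranks = (1, 16, 16, 1)                          # TT ranks (assumed)

# TT cores G_k with shape (r_{k-1}, n_k, d_k, r_k); these replace the dense N x D table.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[k], n_factors[k], d_factors[k], ranks[k + 1])) * 0.1
         for k in range(3)]

def tt_embedding(node_id: int) -> np.ndarray:
    """Reconstruct one embedding row by contracting the selected core slices."""
    idx = np.unravel_index(node_id, n_factors)   # flat node id -> multi-index (i1, i2, i3)
    out = cores[0][:, idx[0], :, :]              # (1, d1, r1)
    for k in range(1, 3):
        slice_k = cores[k][:, idx[k], :, :]      # (r_{k-1}, d_k, r_k)
        out = np.einsum('adr,rek->adek', out, slice_k)
        out = out.reshape(1, -1, slice_k.shape[-1])
    return out.reshape(emb_dim)

vec = tt_embedding(42)
tt_params = sum(c.size for c in cores)
print(vec.shape, f"TT params: {tt_params} vs dense: {num_nodes * emb_dim}")
```

The compression comes from storing only the cores: the parameter count grows with the TT rank rather than with the full product N x D.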
1 code implementation • 19 Nov 2019 • Rahul Duggal, Cao Xiao, Richard Vuduc, Jimeng Sun
With CUP, we overcome two limitations of prior work, the first being non-uniform pruning: CUP can efficiently determine the ideal number of filters to prune in each layer of a neural network.
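As a rough illustration of non-uniform, per-layer pruning, the sketch below clusters each layer's filters hierarchically and keeps one representative per cluster, so the number of surviving filters falls out differently for each layer. The filter features, linkage, and threshold `t` are assumptions for illustration, not the paper's exact CUP algorithm.

```python
# Generic sketch of non-uniform filter pruning via per-layer clustering.
# Features, linkage, and threshold t are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def filters_to_keep(conv_weight: np.ndarray, t: float) -> np.ndarray:
    """conv_weight: (out_channels, in_channels, kH, kW). Returns indices of filters to keep."""
    feats = conv_weight.reshape(conv_weight.shape[0], -1)      # one feature vector per filter
    Z = linkage(feats, method='average', metric='euclidean')   # hierarchical clustering
    labels = fcluster(Z, t=t, criterion='distance')            # cut the dendrogram at threshold t
    keep = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # Keep the cluster member with the largest L1 norm as the representative.
        keep.append(members[np.argmax(np.abs(feats[members]).sum(axis=1))])
    return np.sort(np.array(keep))

# The same threshold is applied to every layer, so the fraction of filters
# pruned differs per layer, i.e., the pruning is non-uniform.
rng = np.random.default_rng(0)
for name, shape in [('conv1', (64, 3, 3, 3)), ('conv2', (128, 64, 3, 3))]:
    kept = filters_to_keep(rng.standard_normal(shape), t=7.0)
    print(name, f'keeps {len(kept)}/{shape[0]} filters')
```

Raising `t` merges more filters into each cluster and therefore prunes more aggressively.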
1 code implementation • 9 Nov 2018 • Patrick Lavin, Jeffrey Young, Jason Riedy, Richard Vuduc, Aaron Vose, Dan Ernst
This paper describes a new benchmark tool, Spatter, for assessing memory system architectures in the context of a specific category of indexed accesses known as gather and scatter.
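For reference, the two access patterns in question look like the numpy sketch below; the uniform-stride index buffer is an illustrative assumption, since Spatter itself takes configurable index patterns.

```python
# Sketch of the gather and scatter access patterns that Spatter benchmarks.
# The uniform-stride index buffer here is only an example pattern.
import numpy as np

n, stride = 1 << 16, 8
src = np.arange(n * stride, dtype=np.float64)
idx = np.arange(n) * stride          # index buffer describing the access pattern

# Gather: dst[i] = src[idx[i]]
gathered = src[idx]

# Scatter: dst[idx[i]] = values[i]
dst = np.zeros_like(src)
dst[idx] = gathered

print(gathered[:4], dst[: stride + 1])
```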
no code implementations • 14 Mar 2018 • Ioakeim Perros, Evangelos E. Papalexakis, Haesun Park, Richard Vuduc, Xiaowei Yan, Christopher deFilippi, Walter F. Stewart, Jimeng Sun
We propose two variants, SUSTain_M and SUSTain_T, to handle matrix and tensor inputs, respectively.
no code implementations • 13 Mar 2017 • Ioakeim Perros, Evangelos E. Papalexakis, Fei Wang, Richard Vuduc, Elizabeth Searles, Michael Thompson, Jimeng Sun
For example, when modeling medical features across a set of patients, the number and duration of treatments may vary widely over time, meaning there is no meaningful way to align their clinical records across time points for analysis.
no code implementations • 25 Oct 2016 • Ioakeim Perros, Robert Chen, Richard Vuduc, Jimeng Sun
It can also do so more accurately and in less time than the state-of-the-art: on a 12th-order subset of the input data, Sparse H-Tucker is 18x more accurate and 7.5x faster than a prior state-of-the-art method.