Search Results for author: Xi Jin

Found 3 papers, 0 papers with code

Hardware Acceleration of Sampling Algorithms in Sample and Aggregate Graph Neural Networks

no code implementations • 7 Sep 2022 • Yuchen Gui, Boyi Wei, Wei Yuan, Xi Jin

Sampling is an important step in many GNN architectures, enabling training on larger datasets at lower computational cost.
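The sample-and-aggregate idea mentioned in the title can be illustrated with fixed-size neighbor sampling in the GraphSAGE style. This is a minimal sketch with a hypothetical toy graph, not code from the paper:

```python
import random

def sample_neighbors(adj, node, k):
    """Uniformly sample up to k neighbors of `node`.

    Capping the neighbor count bounds the per-node work,
    which is what makes training on large graphs tractable.
    """
    neigh = adj[node]
    if len(neigh) <= k:
        return list(neigh)
    return random.sample(neigh, k)

# Toy graph as adjacency lists (illustrative only).
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

# Node 0 has 4 neighbors; sampling keeps only 2 of them.
sampled = sample_neighbors(adj, 0, 2)
```

In hardware-accelerated settings like the one this paper targets, this sampling loop is the part offloaded to the accelerator, since it dominates memory traffic for high-degree nodes.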

BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training

no code implementations • 23 Dec 2020 • Letian Zhao, Rui Xu, Tianqi Wang, Teng Tian, Xiaotian Wang, Wei Wu, Chio-in Ieong, Xi Jin

The size of deep neural networks (DNNs) grows rapidly as machine learning algorithms become more complex.

FPDeep: Scalable Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters

no code implementations • 4 Jan 2019 • Tong Geng, Tianqi Wang, Ang Li, Xi Jin, Martin Herbordt

Among the issues with this approach is that, to keep a distributed cluster highly utilized, the workload assigned to each node must be large, which implies nontrivial growth in the SGD mini-batch size.
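The mini-batch growth follows from simple weak-scaling arithmetic: the global batch is the per-node workload times the node count. A quick sketch with hypothetical numbers (not figures from the paper):

```python
# Weak scaling: each node needs a fixed local batch to stay busy,
# so the effective global SGD mini-batch grows linearly with cluster size.
per_node_batch = 64              # hypothetical per-node workload
node_counts = [8, 64, 512]
global_batches = [per_node_batch * n for n in node_counts]
# At 512 nodes the global mini-batch is already 32768 samples,
# which is known to complicate SGD convergence.
```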
