no code implementations • Findings of the Association for Computational Linguistics 2020 • Xin Guo, Yu Tian, Qinghan Xue, Panos Lampropoulos, Steven Eliuk, Kenneth Barner, Xiaolong Wang
Catastrophic forgetting in neural networks refers to the drop in a deep learning model's performance on previously learned tasks as it is trained on new tasks.
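For intuition, here is a minimal PyTorch sketch of the phenomenon (not from the paper; the tasks and network are synthetic): a small network trained on one task loses accuracy on it after being trained on a second task.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two-class problem: is x0 above the shift? Each task lives in a
    # different region of input space, so their solutions conflict.
    x = torch.randn(512, 2) + shift
    y = (x[:, 0] > shift[0]).long()
    return x, y

task_a = make_task(torch.tensor([0.0, 0.0]))
task_b = make_task(torch.tensor([5.0, 5.0]))

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print("task A after training on A:", accuracy(*task_a))  # high
train(*task_b)
print("task A after training on B:", accuracy(*task_a))  # typically collapses
```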
no code implementations • 25 Sep 2019 • Yang Sun, Abhishek Kolagunda, Steven Eliuk, Xiaolong Wang
During the training stage, we use all the available data (labeled and unlabeled) to train the classifier via a semi-supervised generative framework.
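As a rough illustration of how labeled and unlabeled data can be mixed (a generic stand-in, not the authors' actual generative model), the sketch below trains a shared encoder with a reconstruction loss on all data and a classification loss on the labeled subset only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_labeled = torch.randn(64, 16)
y_labeled = torch.randint(0, 3, (64,))
x_unlabeled = torch.randn(256, 16)  # no labels available for these samples

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
decoder = nn.Linear(8, 16)      # generative/reconstruction head
classifier = nn.Linear(8, 3)    # supervised head

params = (list(encoder.parameters()) + list(decoder.parameters())
          + list(classifier.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

for step in range(100):
    opt.zero_grad()
    x_all = torch.cat([x_labeled, x_unlabeled])
    # Reconstruction term sees ALL data, so unlabeled samples shape the encoder
    recon_loss = mse(decoder(encoder(x_all)), x_all)
    # Classification term sees only the labeled subset
    cls_loss = ce(classifier(encoder(x_labeled)), y_labeled)
    (recon_loss + cls_loss).backward()
    opt.step()
```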
no code implementations • 3 Dec 2018 • Christian Pinto, Yiannis Gkoufas, Andrea Reale, Seetharami Seelam, Steven Eliuk
Deep learning system architects strive to design a balanced system in which the computational accelerator (FPGA, GPU, etc.) is not starved for data.
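A minimal sketch of one common way to avoid that starvation, assuming a PyTorch pipeline (generic, not the system the paper describes): background workers prepare pinned batches while the accelerator computes on the previous one, so I/O and compute overlap.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    dataset = TensorDataset(torch.randn(10_000, 128),
                            torch.randint(0, 10, (10_000,)))
    loader = DataLoader(dataset, batch_size=256,
                        num_workers=4,    # workers prepare batches off the training thread
                        pin_memory=True)  # page-locked memory enables async host-to-device copies

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(128, 10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for x, y in loader:
        # non_blocking copies overlap with compute when the source is pinned
        x = x.to(device, non_blocking=True)
        y = y.to(device, non_blocking=True)
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

if __name__ == "__main__":  # required since num_workers spawns subprocesses
    main()
```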
no code implementations • 19 Nov 2016 • Steven Eliuk, Cameron Upright, Hars Vardhan, Stephen Walsh, Trevor Gale
The paper presents dMath, a parallel math library that demonstrates leading scaling when using intranode, internode, and hybrid parallelism for deep learning (DL).
no code implementations • 5 Apr 2016 • Steven Eliuk, Cameron Upright, Anthony Skjellum
This paper presents dMath, a new scalable parallel math library that demonstrates leading scaling when using intranode, internode, or hybrid parallelism for deep learning.
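Since dMath itself is not publicly available, the sketch below uses mpi4py as a stand-in for its MPI layer to illustrate the data-parallel core of this kind of scaling: each rank computes a gradient on its local shard, and an Allreduce averages gradients so every worker applies the same update.

```python
# Run with, e.g.: mpirun -n 4 python data_parallel_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)  # each rank sees a different data shard
w = np.zeros(1024)                      # model weights, replicated on every rank

for step in range(10):
    local_grad = rng.standard_normal(1024)   # stand-in for a real gradient
    global_grad = np.empty_like(local_grad)
    # Sum gradients across all ranks (intranode and internode alike)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
    # Averaged SGD update; identical on all ranks, keeping replicas in sync
    w -= 0.01 * (global_grad / world)
```

In a real hybrid-parallel setup, this allreduce would run alongside intranode parallelism (multiple GPUs per node sharing a faster interconnect), which is the regime the two dMath papers benchmark.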