Search Results for author: Sungwoo Cho

Found 4 papers, 1 paper with code

Deep Collective Knowledge Distillation

no code implementations • 18 Apr 2023 • Jihyeon Seo, Kyusam Oh, Chanho Min, Yongkeun Yun, Sungwoo Cho

We propose deep collective knowledge distillation (DCKD), a model compression method that trains student models to acquire rich knowledge not only from their teacher model but also from the other student models.

Knowledge Distillation • Model Compression
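
As a rough illustration of the idea described in the abstract (a hedged PyTorch sketch based only on that description, not the authors' DCKD code; the weights `alpha`, `beta` and temperature `T` are hypothetical), a student can be trained against the ground-truth labels, the teacher's soft targets, and the averaged soft targets of its peer students:

```python
import torch
import torch.nn.functional as F

# Sketch of a collective distillation loss: cross-entropy on labels,
# plus KL terms toward the teacher and toward the averaged peer students.
def collective_kd_loss(student_logits, teacher_logits, peer_logits_list,
                       labels, T=4.0, alpha=0.5, beta=0.3):
    ce = F.cross_entropy(student_logits, labels)
    log_p_student = F.log_softmax(student_logits / T, dim=1)

    # distill from the teacher's softened predictions
    kd_teacher = F.kl_div(log_p_student,
                          F.softmax(teacher_logits / T, dim=1),
                          reduction="batchmean") * T * T

    # distill from the average of the other students' softened predictions
    peer_probs = torch.stack(
        [F.softmax(p / T, dim=1) for p in peer_logits_list]).mean(dim=0)
    kd_peers = F.kl_div(log_p_student, peer_probs,
                        reduction="batchmean") * T * T

    return ce + alpha * kd_teacher + beta * kd_peers
```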

Stage-based Hyper-parameter Optimization for Deep Learning

no code implementations • 24 Nov 2019 • Ahnjae Shin, Dong-Jin Shin, Sungwoo Cho, Do Yoon Kim, Eunji Jeong, Gyeong-In Yu, Byung-Gon Chun

As deep learning techniques continue to advance, hyper-parameter optimization has become a major workload in deep learning clusters.
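
As a generic illustration of why this workload is heavy (a plain random-search loop, not the stage-based system the paper proposes; `train_and_evaluate` and the search space are hypothetical stand-ins), every hyper-parameter trial amounts to an independent training job on the cluster:

```python
import random

def train_and_evaluate(lr, batch_size):
    # stands in for a full training run; returns a dummy validation score
    return random.random()

search_space = {"lr": [1e-1, 1e-2, 1e-3], "batch_size": [32, 64, 128]}

# 20 sampled configurations = 20 separate training jobs competing for GPUs
trials = [{k: random.choice(v) for k, v in search_space.items()} for _ in range(20)]
results = [(cfg, train_and_evaluate(**cfg)) for cfg in trials]
best_cfg, best_score = max(results, key=lambda r: r[1])
```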

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs

no code implementations • 4 Dec 2018 • Eunji Jeong, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Byung-Gon Chun

The rapid evolution of deep neural networks demands that deep learning (DL) frameworks not only execute large computations quickly, but also offer straightforward programming models for rapidly implementing and experimenting with complex network structures.
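
A rough analogy for the imperative-to-symbolic-graph idea, using TensorFlow's `tf.function` rather than JANUS itself: an imperative Python function is traced once into a graph, and later calls run the optimized graph instead of re-executing Python op by op.

```python
import tensorflow as tf

@tf.function  # traces the imperative body into a symbolic graph
def dense_layer(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([8, 16])
w = tf.random.normal([16, 4])
b = tf.zeros([4])
y = dense_layer(x, w, b)  # first call builds the graph; later calls reuse it
```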

Parallax: Automatic Data-Parallel Training of Deep Neural Networks

1 code implementation • 8 Aug 2018 • Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

The employment of high-performance servers and GPU accelerators for training deep neural network models has greatly accelerated recent advances in machine learning (ML).

Distributed, Parallel, and Cluster Computing
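
As a minimal, framework-free sketch of synchronous data-parallel training (an illustration of the general technique, not Parallax's API; the linear model and gradient function are hypothetical), each worker computes a gradient on its shard of the batch and the gradients are averaged before a single shared weight update:

```python
import numpy as np

def local_gradient(w, x_shard, y_shard):
    # gradient of mean squared error for a linear model y ~ x @ w
    pred = x_shard @ w
    return 2.0 * x_shard.T @ (pred - y_shard) / len(x_shard)

rng = np.random.default_rng(0)
w = rng.normal(size=(4,))
x, y = rng.normal(size=(32, 4)), rng.normal(size=(32,))

num_workers, lr = 4, 0.1
for step in range(10):
    shards = zip(np.array_split(x, num_workers), np.array_split(y, num_workers))
    grads = [local_gradient(w, xs, ys) for xs, ys in shards]
    w -= lr * np.mean(grads, axis=0)  # all-reduce average, then update the shared weights
```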
