Search Results for author: Byung-Gon Chun

Found 11 papers, 3 papers with code

Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs

no code implementations · NeurIPS 2021 · Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeong-In Yu, Byung-Gon Chun

Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution.
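
The split the abstract refers to is easiest to see side by side. Below is a minimal sketch of the general idea, using PyTorch and its TorchScript tracer purely as stand-ins (this is not Terra's system): the outer program stays imperative Python, while a compute-heavy block is handed to a compiled symbolic graph.

```python
import torch

def heavy_block(x):
    # Compute-heavy portion, suitable for graph-level optimization.
    return torch.relu(x @ x.t()).sum(dim=1)

graph = torch.jit.trace(heavy_block, torch.randn(4, 4))  # symbolic part

x = torch.randn(4, 4)
if x.mean() > 0:          # imperative part: ordinary Python control flow
    x = x - x.mean()
print(graph(x))           # the compiled graph executes the heavy work
```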

SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search

no code implementations · ICLR 2022 · Hyeonmin Ha, Ji-Hoon Kim, Semin Park, Byung-Gon Chun

We propose Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS), a supernet learning strategy based on meta-learning to tackle the knowledge forgetting issue.

Computational Efficiency · Meta-Learning · +1
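
As a rough illustration of the meta-learning flavor (a Reptile-style update over sampled sub-networks; this is a sketch under that assumption, not SUMNAS's actual algorithm): adapt copies of the shared weights to several sub-networks, then move the supernet toward the average of the adapted weights so no single sub-network's update erases what the others learned.

```python
import copy
import torch
import torch.nn as nn

# Shared-weight "supernet"; sub-networks are simulated by channel masks.
supernet = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
data, target = torch.randn(8, 16), torch.randint(0, 4, (8,))

def adapt(net, mask, steps=3, lr=0.01):
    # Inner loop: adapt a copy of the shared weights to one sub-network.
    net = copy.deepcopy(net)
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        hidden = torch.relu(net[0](data)) * mask   # apply the sampled subnet
        loss = nn.functional.cross_entropy(net[2](hidden), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

masks = [(torch.rand(32) > 0.5).float() for _ in range(4)]
adapted = [adapt(supernet, m) for m in masks]

# Outer, Reptile-style step: move toward the mean of the adapted weights.
with torch.no_grad():
    for name, p in supernet.named_parameters():
        mean = torch.stack([dict(a.named_parameters())[name]
                            for a in adapted]).mean(0)
        p += 0.5 * (mean - p)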

Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning

1 code implementation · NeurIPS 2020 · Woosuk Kwon, Gyeong-In Yu, Eunji Jeong, Byung-Gon Chun

Ideally, DL frameworks should fully utilize the computation power of GPUs, so that running time depends only on the amount of computation assigned to them.

Scheduling
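
For intuition, here is a hedged sketch (not Nimble's scheduler) of launching independent GPU tasks on separate CUDA streams in PyTorch so they can overlap rather than serialize on the default stream; it assumes a CUDA-capable machine.

```python
import torch

assert torch.cuda.is_available(), "sketch assumes a CUDA-capable GPU"
a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
with torch.cuda.stream(s1):
    x = a @ a   # task 1, enqueued on stream s1
with torch.cuda.stream(s2):
    y = b @ b   # task 2, enqueued on s2, may overlap with task 1
torch.cuda.synchronize()  # wait for both streams before using x and y
```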

Accelerating Multi-Model Inference by Merging DNNs of Different Weights

no code implementations · 28 Sep 2020 · Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Yunseong Lee, Byung-Gon Chun

Standardized DNN models that have been proven to perform well on machine learning tasks are widely used and often adopted as-is to solve downstream tasks, forming the transfer learning paradigm.

Transfer Learning
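
The paradigm the abstract describes looks like the following sketch, using torchvision's ResNet-18 and its weights API purely for illustration (the paper's DNN-merging technique itself is not shown): a standardized pretrained model is adopted as-is, and only a small task-specific head is trained.

```python
import torch.nn as nn
from torchvision import models

# Adopt a standardized pretrained model as-is...
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False   # reuse the pretrained weights unchanged

# ...and train only a new head for the downstream task (10 classes here).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
```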

Hippo: Taming Hyper-parameter Optimization of Deep Learning with Stage Trees

no code implementations · 22 Jun 2020 · Ahnjae Shin, Do Yoon Kim, Joo Seong Jeong, Byung-Gon Chun

Hyper-parameter optimization is crucial for pushing the accuracy of a deep learning model to its limits.

Stage-based Hyper-parameter Optimization for Deep Learning

no code implementations · 24 Nov 2019 · Ahnjae Shin, Dong-Jin Shin, Sungwoo Cho, Do Yoon Kim, Eunji Jeong, Gyeong-In Yu, Byung-Gon Chun

As deep learning techniques continue to advance, hyper-parameter optimization has become a major workload in deep learning clusters.
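
A hypothetical sketch of the stage-based idea suggested by the title (the function names and caching scheme here are illustrative, not the paper's API): trials whose hyper-parameter sequences share a prefix reuse one cached checkpoint for the shared stages instead of retraining them.

```python
checkpoints = {}   # hyper-parameter prefix -> trained state for that stage

def run_trial(stage_hparams, train_stage):
    state, prefix = None, ()
    for hp in stage_hparams:            # e.g. (warmup_lr, main_lr, ...)
        prefix += (hp,)
        if prefix not in checkpoints:   # compute each stage at most once
            checkpoints[prefix] = train_stage(state, hp)
        state = checkpoints[prefix]
    return state

# Toy stand-in for a training stage: "state" is just a running sum here.
toy_stage = lambda state, hp: (state or 0) + hp
run_trial((0.1, 0.01), toy_stage)    # trains both stages
run_trial((0.1, 0.001), toy_stage)   # reuses the cached 0.1 stage
```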

Making Classical Machine Learning Pipelines Differentiable: A Neural Translation Approach

1 code implementation · 10 Jun 2019 · Gyeong-In Yu, Saeed Amizadeh, Sehoon Kim, Artidoro Pagnoni, Byung-Gon Chun, Markus Weimer, Matteo Interlandi

To this end, we propose a framework that translates a pre-trained ML pipeline into a neural network and fine-tunes the ML models within the pipeline jointly using backpropagation.

BIG-bench Machine Learning · Translation
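
A minimal sketch of this translation, assuming scikit-learn and PyTorch (the paper's actual framework is not shown): a fitted StandardScaler and a binary LogisticRegression are both affine maps, so each becomes a Linear layer initialized from the fitted parameters, after which the whole pipeline is fine-tuned jointly with backpropagation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X = np.random.randn(200, 8).astype(np.float32)
y = (X[:, 0] > 0).astype(np.int64)
scaler = StandardScaler().fit(X)
clf = LogisticRegression().fit(scaler.transform(X), y)

# StandardScaler: (x - mean) / scale is affine, so it becomes a Linear layer.
scale = nn.Linear(8, 8)
with torch.no_grad():
    scale.weight.copy_(torch.diag(torch.tensor(1.0 / scaler.scale_,
                                               dtype=torch.float32)))
    scale.bias.copy_(torch.tensor(-scaler.mean_ / scaler.scale_,
                                  dtype=torch.float32))

# Binary logistic regression becomes a 2-logit Linear layer
# (softmax([0, z]) equals sigmoid(z) for the positive class).
head = nn.Linear(8, 2)
with torch.no_grad():
    w = torch.tensor(clf.coef_[0], dtype=torch.float32)
    head.weight.copy_(torch.stack([torch.zeros_like(w), w]))
    head.bias.copy_(torch.tensor([0.0, float(clf.intercept_[0])]))

pipeline = nn.Sequential(scale, head)   # now differentiable end to end
opt = torch.optim.Adam(pipeline.parameters(), lr=1e-3)
for _ in range(10):                     # joint fine-tuning by backprop
    loss = nn.functional.cross_entropy(pipeline(torch.from_numpy(X)),
                                       torch.from_numpy(y))
    opt.zero_grad()
    loss.backward()
    opt.step()
```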

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs

no code implementations · 4 Dec 2018 · Eunji Jeong, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Byung-Gon Chun

The rapid evolution of deep neural networks demands that deep learning (DL) frameworks not only execute large computations quickly, but also support straightforward programming models for quickly implementing and experimenting with complex network structures.
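
As a generic illustration of turning an imperative program into a symbolic graph (using PyTorch's TorchScript compiler, not JANUS itself): data-dependent control flow written as ordinary Python is captured into a graph that can be optimized and executed without the Python interpreter.

```python
import torch

# Imperative Python with data-dependent control flow, compiled ahead of
# time into a symbolic graph by TorchScript.
@torch.jit.script
def imperative_fn(x: torch.Tensor) -> torch.Tensor:
    if bool(x.sum() > 0):     # Python-style branch, captured symbolically
        return x * 2
    return -x

print(imperative_fn(torch.randn(4)))
print(imperative_fn.graph)    # inspect the extracted symbolic graph
```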

Improving the Expressiveness of Deep Learning Frameworks with Recursion

no code implementations · 4 Sep 2018 · Eunji Jeong, Joo Seong Jeong, Soojeong Kim, Gyeong-In Yu, Byung-Gon Chun

Recursive neural networks have been widely used to handle applications with recursively or hierarchically structured data.
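
The recursive pattern in question can be sketched as follows (a generic recursive network in PyTorch, not the paper's framework): a single shared cell is applied recursively, so the computation graph mirrors the hierarchical structure of the input.

```python
import torch
import torch.nn as nn

cell = nn.Linear(2 * 8, 8)   # combines two child embeddings into a parent

def encode(tree):
    if isinstance(tree, torch.Tensor):   # leaf: already an embedding
        return tree
    left, right = tree                   # internal node: (left, right)
    return torch.tanh(cell(torch.cat([encode(left), encode(right)])))

leaf = lambda: torch.randn(8)
root = encode(((leaf(), leaf()), leaf()))   # tree shaped ((a, b), c)
```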

Parallax: Automatic Data-Parallel Training of Deep Neural Networks

1 code implementation · 8 Aug 2018 · Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

The employment of high-performance servers and GPU accelerators for training deep neural network models has greatly accelerated recent advances in machine learning (ML).

Distributed, Parallel, and Cluster Computing
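
A hedged sketch of data-parallel training in miniature (not Parallax, which automates this across distributed GPUs): replicas process different shards of a batch and their gradients are averaged into one shared update. In a real system the replicas would be re-synchronized after every step.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
replicas = [copy.deepcopy(model) for _ in range(2)]   # one per "device"
data, target = torch.randn(8, 16), torch.randint(0, 4, (8,))
shards = zip(data.chunk(2), target.chunk(2))          # split the batch

grads = []
for rep, (x, t) in zip(replicas, shards):
    loss = nn.functional.cross_entropy(rep(x), t)
    grads.append(torch.autograd.grad(loss, rep.parameters()))

with torch.no_grad():   # apply the averaged gradient to the shared model
    for p, *gs in zip(model.parameters(), *grads):
        p -= 0.1 * torch.stack(gs).mean(0)
```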

Predicting Execution Time of Computer Programs Using Sparse Polynomial Regression

no code implementations · NeurIPS 2010 · Ling Huang, Jinzhu Jia, Bin Yu, Byung-Gon Chun, Petros Maniatis, Mayur Naik

Our two SPORE algorithms build relationships between responses (e.g., the execution time of a computer program) and features, and select a few of the hundreds of retrieved features to construct an explicitly sparse, non-linear model that predicts the response variable.

Regression
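
The approach as described can be sketched with off-the-shelf tools (an illustration of the idea, not the paper's exact SPORE algorithms): expand the features into polynomial terms, then fit an L1-penalized regression so only a few terms survive in the final sparse, non-linear model.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))                 # program features
t = 3.0 * X[:, 0] * X[:, 1] + X[:, 2] ** 2     # "execution time" response

poly = PolynomialFeatures(degree=2, include_bias=False)
model = Lasso(alpha=0.01).fit(poly.fit_transform(X), t)

kept = np.flatnonzero(model.coef_)             # the few selected terms
print([poly.get_feature_names_out()[i] for i in kept])
```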
