Search Results for author: Seungwon Lee

Found 11 papers, 0 papers with code

Learning Service Selection Decision Making Behaviors During Scientific Workflow Development

no code implementations 30 Mar 2024 Xihao Xie, Jia Zhang, Rahul Ramachandran, Tsengdar J. Lee, Seungwon Lee

In this paper, a novel context-aware approach is proposed for recommending next services in a workflow development process, by learning service representations and service selection decision-making behaviors from workflow provenance.
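
As background only, the toy sketch below illustrates the general idea of mining next-service suggestions from historical workflow sequences; it is not the context-aware model proposed in the paper, and the service names and data are hypothetical.

```python
# Purely illustrative sketch (NOT the paper's context-aware model): recommend
# the next service in a partially built workflow from simple co-occurrence
# statistics mined from historical workflow provenance. Service names are
# hypothetical.
from collections import Counter, defaultdict

# Hypothetical provenance: each workflow is an ordered list of service IDs.
provenance = [
    ["fetch_modis", "regrid", "cloud_mask", "compute_ndvi"],
    ["fetch_modis", "cloud_mask", "compute_ndvi", "plot_map"],
    ["fetch_gpm", "regrid", "accumulate", "plot_map"],
]

# Count how often each service follows a given predecessor.
next_counts = defaultdict(Counter)
for workflow in provenance:
    for prev, nxt in zip(workflow, workflow[1:]):
        next_counts[prev][nxt] += 1

def recommend_next(current_service, k=3):
    """Return up to k services most frequently observed after current_service."""
    return [s for s, _ in next_counts[current_service].most_common(k)]

print(recommend_next("regrid"))  # e.g. ['cloud_mask', 'accumulate']
```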

Decision Making · Sentence

Breaking MLPerf Training: A Case Study on Optimizing BERT

no code implementations 4 Feb 2024 YongDeok Kim, Jaehyung Ahn, Myeongwoo Kim, Changin Choi, Heejae Kim, Narankhuu Tuvshinjargal, Seungwon Lee, Yanzi Zhang, Yuan Pei, Xiongzhan Linghu, Jingkun Ma, Lin Chen, Yuehua Dai, Sungjoo Yoo

Speeding up large-scale distributed training is challenging in that it requires improving various components of training, including load balancing, communication, and optimizers.

Hyperparameter Optimization

Goal-Driven Context-Aware Next Service Recommendation for Mashup Composition

no code implementations 25 Oct 2022 Xihao Xie, Jia Zhang, Rahul Ramachandran, Tsengdar J. Lee, Seungwon Lee

As service-oriented architecture has become one of the most prevalent techniques for rapidly delivering functionality to customers, increasingly more reusable software components have been published online in the form of web services.

Decision Making

Learning Context-Aware Service Representation for Service Recommendation in Workflow Composition

no code implementations 24 May 2022 Xihao Xie, Jia Zhang, Rahul Ramachandran, Tsengdar J. Lee, Seungwon Lee

As increasingly more software services have been published on the Internet, it remains a significant challenge to recommend suitable services to facilitate scientific workflow composition.

Sentence

Mako: Semi-supervised continual learning with minimal labeled data via data programming

no code implementations 29 Sep 2021 Pengyuan Lu, Seungwon Lee, Amanda Watson, David Kent, Insup Lee, Eric Eaton, James Weimer

This tool achieves performance similar to training on fully labeled data, in terms of per-task accuracy and resistance to catastrophic forgetting.

Continual Learning · Image Classification

Data-free mixed-precision quantization using novel sensitivity metric

no code implementations 18 Mar 2021 DongHyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song, Changkyu Choi

Post-training quantization is a representative technique for compressing neural networks, making them smaller and more efficient for deployment on edge devices.
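
For readers unfamiliar with the technique, the following minimal sketch shows a generic uniform, symmetric post-training quantization of a weight tensor in NumPy; it is a common baseline and does not reflect the paper's mixed-precision method or its sensitivity metric.

```python
# Minimal sketch of uniform symmetric post-training quantization (a generic
# baseline); the paper's mixed-precision scheme and sensitivity metric are
# not reproduced here.
import numpy as np

def quantize_symmetric(weights: np.ndarray, num_bits: int = 8):
    """Quantize float weights to signed integers with one per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                    # e.g. 127 for 8-bit
    scale = float(np.max(np.abs(weights))) / qmax     # per-tensor scale factor
    # Stored in a wide integer dtype for simplicity, regardless of num_bits.
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_symmetric(w)
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```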

Quantization

Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer

no code implementations ICML Workshop LifelongML 2020 Seungwon Lee, Sima Behpour, Eric Eaton

In deep networks, transferring the appropriate granularity of knowledge is as important as the transfer mechanism, and must be driven by the relationships among tasks.

Revisiting Classical Bagging with Modern Transfer Learning for On-the-fly Disaster Damage Detector

no code implementations 4 Oct 2019 Junghoon Seo, Seungwon Lee, Beomsu Kim, Taegyun Jeon

In this paper, we revisit the classical bootstrap aggregating approach in the context of modern transfer learning for data-efficient disaster damage detection.
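
For context, the snippet below sketches only the classical bootstrap-aggregating part of that combination, using scikit-learn on synthetic data; the modern transfer-learning backbones the paper pairs it with are not shown.

```python
# Sketch of the classical bootstrap-aggregating half only, on synthetic data
# with scikit-learn's default decision-tree base learners; the fine-tuned
# transfer-learning backbones used in the paper are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each base learner sees a bootstrap resample of the training set, and the
# ensemble aggregates their predictions by majority vote.
bagger = BaggingClassifier(n_estimators=10, bootstrap=True, random_state=0)
bagger.fit(X_tr, y_tr)
print("held-out accuracy:", bagger.score(X_te, y_te))
```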

Change Detection · Disentanglement +3

Training Deep Neural Network in Limited Precision

no code implementations 12 Oct 2018 Hyunsun Park, Jun Haeng Lee, Youngmin Oh, Sangwon Ha, Seungwon Lee

Energy and resource efficient training of DNNs will greatly extend the applications of deep learning.

Quantization for Rapid Deployment of Deep Neural Networks

no code implementations ICLR 2019 Jun Haeng Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee

This paper aims at rapid deployment of state-of-the-art deep neural networks (DNNs) to energy-efficient accelerators without time-consuming fine-tuning or access to the full datasets.

Object Detection +1
