1 code implementation • 17 Apr 2024 • Yeonguk Yu, Sungho Shin, Seunghyeok Back, Minhwan Ko, Sangjun Noh, Kyoobin Lee
After the blocks are adjusted for the current test domain, we generate pseudo-labels by averaging the predictions for the given test images and their flipped counterparts.
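As a rough sketch of flip-averaged pseudo-labeling (not the authors' implementation — the toy linear "model" and image shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 5))  # hypothetical weights of a toy linear classifier

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def model(x):
    # x: (N, 8, 8) images -> (N, 5) class probabilities
    return softmax(x.reshape(len(x), -1) @ W)

def flip_averaged_pseudo_labels(x):
    # Average predictions over each test image and its horizontally
    # flipped counterpart, then take the argmax as the pseudo-label.
    avg = 0.5 * (model(x) + model(x[..., ::-1]))
    return avg.argmax(axis=-1), avg

images = rng.random((4, 8, 8))
labels, avg = flip_averaged_pseudo_labels(images)
```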
no code implementations • 28 Jun 2023 • Seunghyeok Back, Sangbeom Lee, KangMin Kim, Joosoon Lee, Sungho Shin, Jemo Maeng, Kyoobin Lee
Moreover, a real-world robotic experiment demonstrated the practical applicability of our method in improving the performance of target object grasping tasks in cluttered environments.
1 code implementation • 8 Mar 2023 • Sungho Shin, Yeonguk Yu, Kyoobin Lee
This approach differs from conventional knowledge distillation frameworks, which use L_p distance metrics, and offers the advantage of converging well when reducing the distance between features of different resolutions.
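For contrast, the conventional baseline the abstract refers to minimizes an L_p distance between teacher and student feature maps; a minimal L_2 version (tensor shapes invented for illustration, assuming the features are already matched in shape) is:

```python
import numpy as np

def lp_feature_distillation_loss(f_student, f_teacher, p=2):
    # Conventional KD baseline: mean L_p distance between the student's
    # and teacher's feature maps (assumed already matched in shape,
    # e.g. by a projection layer).
    diff = np.abs(f_student - f_teacher) ** p
    return diff.mean()

f_t = np.ones((2, 16))   # toy teacher features
f_s = np.zeros((2, 16))  # toy student features
loss = lp_feature_distillation_loss(f_s, f_t)  # mean of 1**2 -> 1.0
```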
no code implementations • 22 Dec 2022 • Alexander Engelmann, Sungho Shin, François Pacaud, Victor M. Zavala
The real-time operation of large-scale infrastructure networks requires scalable optimization capabilities.
1 code implementation • CVPR 2023 • Yeonguk Yu, Sungho Shin, Seongju Lee, Changhyun Jun, Kyoobin Lee
In this study, we first reveal that the norm of the feature map obtained from a block other than the last block can be a better indicator of OOD detection.
Out-of-Distribution (OOD) Detection
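A rough sketch of the idea — scoring inputs by the norm of an intermediate block's feature map (the two-block network below is invented for illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 32))  # hypothetical first-block weights
W2 = rng.normal(size=(32, 5))   # hypothetical last-block weights

def block1(x):
    # Intermediate feature map (a block other than the last one).
    return np.maximum(x @ W1, 0.0)

def logits(x):
    # Last block; not used for the OOD score.
    return block1(x) @ W2

def ood_score(x):
    # Score each input by the L2 norm of the intermediate feature map;
    # a threshold on this norm then flags OOD inputs.
    return np.linalg.norm(block1(x), axis=-1)

scores = ood_score(rng.normal(size=(8, 10)))
```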
1 code implementation • 29 Sep 2022 • Sungho Shin, Joosoon Lee, Junseok Lee, Yeonguk Yu, Kyoobin Lee
Deep learning has achieved outstanding performance on face recognition benchmarks, but performance degrades significantly for low-resolution (LR) images.
1 code implementation • 12 Apr 2022 • Sungho Shin, Yiheng Lin, Guannan Qu, Adam Wierman, Mihai Anitescu
This paper studies the trade-off between the degree of decentralization and the performance of a distributed controller in a linear-quadratic control setting.
no code implementations • 8 Jan 2021 • Sungho Shin, Mihai Anitescu, Victor M. Zavala
We study solution sensitivity for nonlinear programs (NLPs) whose structures are induced by graphs.
Stochastic Optimization • Optimization and Control
no code implementations • 7 Jan 2021 • Joosoon Lee, Seongju Lee, Seunghyeok Back, Sungho Shin, Kyoobin Lee
Understanding assembly instructions has the potential to enhance the robot's task planning ability and enable advanced robotic applications.
1 code implementation • 5 Nov 2020 • Paul F. Lang, Sungho Shin, Victor M. Zavala
Motivation: Estimating model parameters from experimental observations is one of the key challenges in systems biology and can be computationally very expensive.
no code implementations • 30 Sep 2020 • Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung
In this study, we propose stochastic precision ensemble training for QDNNs (SPEQ).
no code implementations • 5 Sep 2020 • Wonyong Sung, Iksoo Choi, Jinhwan Park, Seokhyun Choi, Sungho Shin
The proposed method is compared with the conventional SGD method and previous weight-noise injection algorithms using convolutional neural networks for image classification.
no code implementations • 22 Aug 2020 • Jongwon Kim, Sungho Shin, Yeonguk Yu, Junseok Lee, Kyoobin Lee
We divided a single deep learning architecture into a common extractor, a cloud model, and a local classifier for distributed learning.
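A minimal sketch of such a three-way split (the linear blocks and layer sizes are invented; the real components are deep networks):

```python
import numpy as np

rng = np.random.default_rng(0)
We = rng.normal(size=(32, 16))  # common extractor weights (on-device)
Wc = rng.normal(size=(16, 16))  # cloud model weights (server side)
Wl = rng.normal(size=(16, 4))   # local classifier weights (back on-device)

def common_extractor(x):
    # Shared low-level feature extraction, run on the device.
    return np.maximum(x @ We, 0.0)

def cloud_model(f):
    # Heavy middle computation, offloaded to the cloud.
    return np.maximum(f @ Wc, 0.0)

def local_classifier(h):
    # Lightweight task-specific head, run locally.
    return (h @ Wl).argmax(axis=-1)

# Inference path: device -> cloud -> device
x = rng.normal(size=(3, 32))
pred = local_classifier(cloud_model(common_extractor(x)))
```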
2 code implementations • 9 Jun 2020 • Jordan Jalving, Sungho Shin, Victor M. Zavala
We present a general graph-based modeling abstraction for optimization that we call an OptiGraph.
Optimization and Control
no code implementations • 31 May 2020 • Yoonho Boo, Sungho Shin, Wonyong Sung
This study proposes a holistic approach for the optimization of QDNNs, which contains QDNN training methods as well as quantization-friendly architecture design.
no code implementations • 14 May 2020 • Sen Na, Sungho Shin, Mihai Anitescu, Victor M. Zavala
We study the convergence properties of an overlapping Schwarz decomposition algorithm for solving nonlinear optimal control problems (OCPs).
1 code implementation • 16 Mar 2020 • Sungho Shin, Qiugang Lu, Victor M. Zavala
This paper presents unifying results for subspace identification (SID) and dynamic mode decomposition (DMD) for autonomous dynamical systems.
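As a sketch of the DMD side of this connection (the system matrix and trajectory below are synthetic, for illustration only):

```python
import numpy as np

def dmd_operator(X):
    # X: (n, T) snapshot matrix of an autonomous system x_{t+1} = A x_t.
    # DMD estimates A by least squares: A ~= X1 @ pinv(X0).
    X0, X1 = X[:, :-1], X[:, 1:]
    return X1 @ np.linalg.pinv(X0)

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])

# Roll out a 10-step trajectory from a random initial state.
x = rng.normal(size=3)
X = np.empty((3, 10))
for t in range(10):
    X[:, t] = x
    x = A @ x

A_hat = dmd_operator(X)  # recovers A when the data is sufficiently rich
```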
no code implementations • 12 Mar 2020 • Sungho Shin, Alex D. Smith, S. Joe Qin, Victor M. Zavala
In this work, we show that this algorithm is a specialized variant of a coordinate maximization algorithm.
no code implementations • 2 Feb 2020 • Sungho Shin, Yoonho Boo, Wonyong Sung
Model averaging is a promising approach for achieving good generalization in DNNs, especially when the training loss surface contains many sharp minima.
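A minimal sketch of averaging model weights across checkpoints (parameter names and values are invented):

```python
import numpy as np

def average_checkpoints(checkpoints):
    # Element-wise mean of parameter tensors across saved checkpoints,
    # producing a single averaged model.
    keys = checkpoints[0].keys()
    return {k: sum(c[k] for c in checkpoints) / len(checkpoints)
            for k in keys}

ckpts = [
    {"w": np.array([0.0, 2.0]), "b": np.array([1.0])},
    {"w": np.array([2.0, 4.0]), "b": np.array([3.0])},
]
avg = average_checkpoints(ckpts)  # {"w": [1., 3.], "b": [2.]}
```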
no code implementations • 4 Sep 2019 • Sungho Shin, Yoonho Boo, Wonyong Sung
Knowledge distillation (KD) is a very popular method for model size reduction.
no code implementations • NeurIPS 2018 • Jinhwan Park, Yoonho Boo, Iksoo Choi, Sungho Shin, Wonyong Sung
The RNN implementation on embedded devices can suffer from excessive DRAM accesses because the parameter size of a neural network usually exceeds that of the cache memory and the parameters are used only once for each time step.
Automatic Speech Recognition (ASR) +1
no code implementations • 27 Feb 2017 • Sungho Shin, Yoonho Boo, Wonyong Sung
Fixed-point optimization of deep neural networks plays an important role in hardware-based designs and low-power implementations.
no code implementations • 19 Nov 2016 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
The complexity of deep neural network algorithms for hardware implementation can be lowered either by scaling the number of units or reducing the word-length of weights.
no code implementations • 30 Sep 2016 • Minjae Lee, Kyuyeon Hwang, Jinhwan Park, Sungwook Choi, Sungho Shin, Wonyong Sung
The weights are quantized to 6 bits to store all of them in the on-chip memory of an FPGA.
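A symmetric uniform 6-bit quantizer sketch (a common scheme, not necessarily the paper's exact one; the per-tensor max-abs scale rule is an assumption):

```python
import numpy as np

def quantize_uniform(w, bits=6):
    # Map float weights to 2**bits integer levels with a per-tensor scale.
    qmax = 2 ** (bits - 1) - 1           # 31 for 6 bits
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale      # 6-bit codes fit in int8 storage

def dequantize(q, scale):
    return q.astype(np.float64) * scale

w = np.linspace(-1.0, 1.0, 64)
q, s = quantize_uniform(w)
w_hat = dequantize(q, s)  # reconstruction error bounded by scale / 2
```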
no code implementations • 14 Aug 2016 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
In this paper, we propose a generative knowledge transfer technique that trains an RNN based language model (student network) using text and output probabilities generated from a previously trained RNN (teacher network).
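The teacher-to-student transfer objective can be sketched as a soft-target cross-entropy between the teacher's output probabilities and the student's distribution (the temperature and toy distributions below are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_target_loss(student_logits, teacher_probs, T=2.0):
    # Cross-entropy between the temperature-softened teacher distribution
    # and the student's softened distribution; minimized when they match.
    p = softmax(np.log(teacher_probs + 1e-12) / T)
    q = softmax(student_logits / T)
    return -(p * np.log(q + 1e-12)).sum(axis=-1).mean()

teacher_probs = np.array([[0.7, 0.2, 0.1]])
matched = soft_target_loss(np.log(teacher_probs), teacher_probs)
mismatched = soft_target_loss(np.array([[0.0, 0.0, 5.0]]), teacher_probs)
```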
no code implementations • 14 Aug 2016 • Sungho Shin, Wonyong Sung
Gesture recognition is an essential technology for many wearable devices.
no code implementations • 4 Dec 2015 • Sungho Shin, Kyuyeon Hwang, Wonyong Sung
Recurrent neural networks have shown excellent performance in many applications; however, they require increased complexity in hardware- or software-based implementations.
no code implementations • 20 Nov 2015 • Wonyong Sung, Sungho Shin, Kyuyeon Hwang
In this work, the effects of retraining are analyzed for a feedforward deep neural network (FFDNN) and a convolutional neural network (CNN).