no code implementations • 27 Sep 2023 • Khuong Vo, Mostafa El-Khamy, Yoojin Choi
Here, we propose a subject-independent attention-based deep state-space model to translate PPG signals to corresponding ECG waveforms.
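A minimal sketch of the general idea (sequence-to-sequence PPG-to-ECG translation with attention); this simplified recurrent encoder-decoder is an illustration, not the authors' exact state-space formulation, and all module sizes are assumptions:

```python
import torch
import torch.nn as nn

class PPG2ECG(nn.Module):
    """Illustrative attention-based sequence translator (not the paper's exact model)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)   # encode the PPG sequence
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                     # per-step ECG amplitude

    def forward(self, ppg):                  # ppg: (batch, T, 1)
        enc, _ = self.encoder(ppg)           # (batch, T, hidden)
        ctx, _ = self.attn(enc, enc, enc)    # attend over the encoded PPG
        dec, _ = self.decoder(ctx)
        return self.head(dec)                # (batch, T, 1) predicted ECG

ppg = torch.randn(8, 256, 1)                 # toy batch of PPG windows
print(PPG2ECG()(ppg).shape)                  # torch.Size([8, 256, 1])
```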
no code implementations • 26 Oct 2022 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
We propose a novel method for training a conditional generative adversarial network (CGAN) without the use of training data, called zero-shot learning of a CGAN (ZS-CGAN).
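The excerpt does not detail the training objective. One common data-free recipe, shown below purely as an assumption rather than the paper's method, trains a conditional generator against a frozen pre-trained classifier so that generated samples are classified as their conditioning labels; the classifier and generator shapes are hypothetical:

```python
import torch
import torch.nn.functional as F
from torch import nn

# Hypothetical frozen pre-trained classifier (stand-in for a real one).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
for p in classifier.parameters():
    p.requires_grad_(False)

class CondGen(nn.Module):
    """Toy conditional generator: noise z plus a label embedding."""
    def __init__(self, z_dim=32, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(2 * z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 28 * 28))
    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1))

gen = CondGen()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(10):                                   # toy loop; no real data used
    z = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    loss = F.cross_entropy(classifier(gen(z, y)), y)  # fakes must match label y
    opt.zero_grad(); loss.backward(); opt.step()
```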
no code implementations • 11 Oct 2022 • Sijia Wang, Yoojin Choi, Junya Chen, Mostafa El-Khamy, Ricardo Henao
This leads to prohibitively large growth of the knowledge repository when learning from a long sequence of tasks.
no code implementations • 17 Jun 2021 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
In conventional generative replay, the generative model is pre-trained on old data and kept in extra memory for later incremental learning.
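A sketch of the generative-replay pattern the excerpt describes: pseudo-samples of old classes drawn from the stored generator are mixed into each new-task batch. The generator signature and all names here are illustrative assumptions:

```python
import torch

def replay_batch(generator, old_classes, batch_size=32, z_dim=32):
    """Draw pseudo-samples of old classes from the stored generator (illustrative).

    old_classes: 1-D tensor of previously seen class ids.
    """
    z = torch.randn(batch_size, z_dim)
    y = old_classes[torch.randint(0, len(old_classes), (batch_size,))]
    with torch.no_grad():
        x = generator(z, y)          # generator from previous tasks, kept in memory
    return x, y

# During incremental training, each step combines new and replayed data:
#   x_new, y_new = next(new_task_loader)
#   x_old, y_old = replay_batch(stored_generator, old_classes)
#   loss = criterion(model(torch.cat([x_new, x_old])),
#                    torch.cat([y_new, y_old]))
```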
1 code implementation • 8 May 2020 • Yoojin Choi, Jihwan Choi, Mostafa El-Khamy, Jungwon Lee
The synthetic data are produced by a generator, yet no original training data are used either to train the generator or to perform quantization.
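A sketch of how generator-produced synthetic data can drive quantization without any real samples; the plain min-max uniform quantizer below is an assumption, not the paper's exact scheme:

```python
import torch

def quantize_tensor(x, num_bits=8):
    """Uniform min-max quantization (illustrative, not the paper's exact scheme)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale          # dequantized tensor for simulation

def calibrate_with_synthetic_data(model, generator, n_batches=10, z_dim=32):
    """Feed generator outputs through the model to observe activation ranges."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_batches):
            x = generator(torch.randn(64, z_dim))   # synthetic data, no real samples
            model(x)                                # hooks could record min/max here
```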
no code implementations • 25 Sep 2019 • J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee
A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks.
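A compact sketch of the idea of a single shared latent for two correlated variables x and y; the toy linear encoder/decoder and all dimensions are assumptions:

```python
import torch
from torch import nn

class JointVAE(nn.Module):
    """Shared latent z for two correlated variables (illustrative sizes)."""
    def __init__(self, dx=16, dy=16, dz=8):
        super().__init__()
        self.enc = nn.Linear(dx + dy, 2 * dz)    # encode (x, y) jointly into mu, logvar
        self.dec_x = nn.Linear(dz, dx)           # decode x from the common z
        self.dec_y = nn.Linear(dz, dy)           # decode y from the same z

    def forward(self, x, y):
        mu, logvar = self.enc(torch.cat([x, y], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec_x(z), self.dec_y(z), mu, logvar

# Joint generation: sample z ~ N(0, I) and decode both x and y.
# Conditional generation: infer z from the observed variable, decode the other.
```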
no code implementations • ICCV 2019 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
Our model also shows performance comparable to, and sometimes better than, state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.
no code implementations • 27 May 2019 • J. Jon Ryu, Yoojin Choi, Young-Han Kim, Mostafa El-Khamy, Jungwon Lee
A new bimodal generative model is proposed for generating conditional and joint samples, accompanied by a training method that learns a succinct bottleneck representation.
no code implementations • 21 Feb 2019 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
We consider the optimization of deep convolutional neural networks (CNNs) such that they provide good performance while having reduced complexity if deployed on either conventional systems with spatial-domain convolution or lower-complexity systems designed for Winograd convolution.
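For background on why the two deployment targets differ, the standard Winograd minimal-filtering algorithm F(2,3) computes two 1-D convolution outputs with four multiplications instead of six; this textbook transform illustrates the cost structure, not the paper's joint optimization:

```python
import numpy as np

# Winograd F(2,3): two outputs of a 3-tap sliding dot product with 4 multiplies.
Bt = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
At = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3-tap filter -> 2 outputs of the sliding dot product."""
    return At @ ((G @ g) * (Bt @ d))       # elementwise product in the Winograd domain

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
print(winograd_f23(d, g))                  # [4.5, 6.0]
print(np.array([d[0:3] @ g, d[1:4] @ g]))  # direct convolution agrees
```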
no code implementations • 1 Sep 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
In training low-precision networks, the backward pass performs gradient descent on high-precision weights, while quantized low-precision weights and activations are used in the forward pass to compute the training loss.
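This is the straight-through estimator pattern, sketched below; the per-tensor symmetric scaling is an assumed detail, not taken from the paper:

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Forward: quantize; backward: pass gradients straight through (STE)."""
    @staticmethod
    def forward(ctx, w, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                 # gradient flows to high-precision weights

w = torch.randn(4, 4, requires_grad=True)    # high-precision master weights
loss = (QuantizeSTE.apply(w, 4) ** 2).sum()  # forward uses quantized weights
loss.backward()                              # backward updates w in high precision
print(w.grad is not None)                    # True
```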
no code implementations • 21 May 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
In particular, the proposed framework produces one compressed model whose convolutional filters can be made sparse either in the spatial domain or in the Winograd domain.
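As a rough illustration of Winograd-domain sparsity, the sketch below transforms a 3x3 filter with the F(2x2, 3x3) matrices and zeroes its small coefficients; magnitude pruning with a quantile threshold is a simplified stand-in for the paper's framework:

```python
import numpy as np

G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)

def winograd_domain_prune(g3x3, sparsity=0.5):
    """Transform a 3x3 filter to the 4x4 Winograd domain and zero small entries."""
    W = G @ g3x3 @ G.T                       # Winograd-domain filter, F(2x2, 3x3)
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

g = np.random.randn(3, 3)
W_sparse = winograd_domain_prune(g, sparsity=0.5)
print((W_sparse == 0).mean())                # fraction of zeroed coefficients
```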
no code implementations • NIPS Workshop CDNNRIA 2018 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment.
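The pipeline the excerpt names, quantization followed by lossless entropy coding, can be sketched as follows; the uniform step size and the Huffman coder are illustrative choices, not the paper's specific combination:

```python
import heapq
from collections import Counter
import numpy as np

def huffman_lengths(counts):
    """Code lengths from symbol counts via a Huffman tree (lossless source coding)."""
    heap = [(c, [s]) for s, c in counts.items()]
    heapq.heapify(heap)
    lengths = {s: 0 for s in counts}
    while len(heap) > 1:
        c1, s1 = heapq.heappop(heap)         # merge the two rarest subtrees
        c2, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1                  # symbols in merged subtrees get deeper
        heapq.heappush(heap, (c1 + c2, s1 + s2))
    return lengths

w = np.random.randn(10000).astype(np.float32)       # stand-in for network weights
q = np.round(w / 0.05).astype(int)                  # uniform quantization, step 0.05
counts = Counter(q.tolist())
lengths = huffman_lengths(counts)
bits = sum(counts[s] * lengths[s] for s in counts)  # Huffman-coded size
print(f"{bits / (w.size * 32):.3f} of float32 size")
```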
no code implementations • 5 Dec 2016 • Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
Network quantization is a network compression technique that reduces the redundancy of deep neural networks.
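A minimal sketch of quantization as weight sharing: clustering weights so that each is replaced by one of a few shared values. Plain k-means is used here as an assumption; the paper's clustering criterion may differ:

```python
import numpy as np

def kmeans_quantize(w, k=8, iters=20):
    """Cluster weights into k shared values (plain k-means; illustrative)."""
    centers = np.linspace(w.min(), w.max(), k)    # initialize centers over the range
    for _ in range(iters):
        idx = np.abs(w[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if (idx == j).any():
                centers[j] = w[idx == j].mean()   # move center to cluster mean
    return centers[idx]                           # each weight -> its cluster center

w = np.random.randn(5000)
wq = kmeans_quantize(w, k=8)
print(len(np.unique(wq)))                         # at most 8 distinct weight values
```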