no code implementations • 28 Mar 2023 • Minsoo Kang, Hyewon Yoo, Eunhee Kang, Sehwan Ki, Hyong-Euk Lee, Bohyung Han
We propose an information-theoretic knowledge distillation approach for compressing generative adversarial networks, which maximizes the mutual information between teacher and student networks via variational optimization based on an energy-based model.
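To make the idea concrete, the mutual information between teacher and student features can be lower-bounded variationally, I(t; s) ≥ H(t) + E[log q(t | s)], and the distillation loss minimizes the negative of that bound. The sketch below is a simplified illustration that models q as a diagonal Gaussian whose mean is a linear map of the student features; the paper itself parameterizes q with an energy-based model, and all names here (`vid_loss`, the toy features) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def vid_loss(teacher_feat, student_feat, W, log_var):
    """Negative variational bound -E[log q(t|s)] up to constants:
    a Gaussian regression loss with a learned per-dimension variance.
    (Simplified stand-in for the paper's energy-based q.)"""
    mu = student_feat @ W                      # predicted teacher features
    var = np.exp(log_var)                      # per-dimension variance
    nll = 0.5 * (log_var + (teacher_feat - mu) ** 2 / var)
    return nll.mean()

# Toy features: batch of 8, feature dimension 4.
t = rng.normal(size=(8, 4))
s = t + 0.1 * rng.normal(size=(8, 4))          # student roughly tracks teacher
loss_good = vid_loss(t, s, np.eye(4), np.zeros(4))
loss_bad = vid_loss(t, rng.normal(size=(8, 4)), np.eye(4), np.zeros(4))
print(loss_good < loss_bad)  # aligned features yield a tighter (lower) loss
```

Minimizing this loss drives the student features toward being predictive of the teacher's, which is exactly what a high mutual information requires.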
no code implementations • 14 Feb 2019 • Soomin Seo, Sehwan Ki, Munchurl Kim
Owing to the strength of deep convolutional neural networks (CNNs), many CNN-based image quality assessment (IQA) models have recently been studied.