Search Results for author: Jihun Yun

Found 7 papers, 1 paper with code

TEDDY: Trimming Edges with Degree-based Discrimination strategY

1 code implementation • 2 Feb 2024 • Hyunjin Seo, Jihun Yun, Eunho Yang

Since the pioneering work on the lottery ticket hypothesis for graph neural networks (GNNs) by Chen et al. (2021), the study of finding graph lottery tickets (GLT) has become one of the pivotal focuses in the GNN community, inspiring researchers to discover sparser GLTs while achieving performance comparable to the original dense networks.
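
The sketch below is a minimal, hypothetical illustration of degree-based edge trimming for GNN sparsification; the scoring rule and the function name degree_based_edge_pruning are assumptions for illustration, not the TEDDY procedure itself.

```python
import numpy as np

# Illustrative sketch only (not the authors' TEDDY algorithm): prune graph
# edges using a score derived from endpoint degrees, keeping a target ratio.
def degree_based_edge_pruning(edge_index, num_nodes, keep_ratio=0.5):
    src, dst = edge_index                      # edge list as two index arrays
    deg = np.bincount(np.concatenate([src, dst]), minlength=num_nodes)
    # Hypothetical score: edges between low-degree nodes are treated as more
    # important for connectivity, so we score by the inverse degree product.
    scores = 1.0 / (deg[src] * deg[dst])
    k = max(1, int(keep_ratio * len(src)))
    keep = np.argsort(-scores)[:k]             # indices of edges to keep
    return src[keep], dst[keep]

# Toy usage: a 5-node graph with 6 edges.
edges = (np.array([0, 0, 1, 2, 3, 3]), np.array([1, 2, 2, 3, 4, 0]))
print(degree_based_edge_pruning(edges, num_nodes=5, keep_ratio=0.5))
```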

Adaptive Proximal Gradient Methods for Structured Neural Networks

no code implementations • NeurIPS 2021 • Jihun Yun, Aurelie C. Lozano, Eunho Yang

We consider the training of structured neural networks where the regularizer can be non-smooth and possibly non-convex.
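
To make the setting concrete, here is a minimal sketch of one proximal gradient step with a group-sparse (group lasso) regularizer, a standard example of a non-smooth structured penalty; the learning rate and lam are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def group_soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_2 applied to one parameter group."""
    norm = np.linalg.norm(w)
    if norm <= tau:
        return np.zeros_like(w)
    return (1.0 - tau / norm) * w

def prox_grad_step(groups, grads, lr=0.1, lam=0.01):
    # Gradient step on the smooth loss, then prox step on the non-smooth term.
    return [group_soft_threshold(w - lr * g, lr * lam)
            for w, g in zip(groups, grads)]

# Toy usage with two parameter groups.
groups = [np.array([0.5, -0.2]), np.array([0.01, 0.02])]
grads = [np.array([0.1, 0.0]), np.array([0.0, 0.01])]
print(prox_grad_step(groups, grads))
```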

Quantization

Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss

no code implementations • ICCV 2021 • Jung Hyun Lee, Jihun Yun, Sung Ju Hwang, Eunho Yang

Network quantization, which aims to reduce the bit-lengths of the network weights and activations, has emerged to enable their deployment on resource-limited devices.
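
As background only, a generic uniform quantizer for a weight tensor makes the notion of "reducing bit-lengths" concrete; this is a textbook scheme, not the cluster-promoting method proposed in the paper.

```python
import numpy as np

def uniform_quantize(w, num_bits=4):
    # Map weights to 2**num_bits evenly spaced levels, then dequantize.
    levels = 2 ** num_bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    q = np.round((w - w_min) / scale)           # integer code in [0, levels]
    return q * scale + w_min                    # dequantized weights

w = np.random.randn(6).astype(np.float32)
print(w)
print(uniform_quantize(w, num_bits=4))
```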

Quantization

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization

no code implementations • 1 Jan 2021 • Jung Hyun Lee, Jihun Yun, Sung Ju Hwang, Eunho Yang

As a natural extension of DropBits, we further introduce a way of learning heterogeneous quantization levels to find the proper bit-length for each layer using DropBits.
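
The following is a loose illustration of the idea of randomly dropping bits during training so that each layer is effectively exercised at several bit-widths; the sampling rule, function names, and re-quantization scheme are all assumptions and differ from the actual DropBits regularizer.

```python
import numpy as np

def sample_bitwidth(max_bits=8, drop_prob=0.3, rng=np.random):
    # Hypothetical rule: drop one bit at a time with probability drop_prob.
    bits = max_bits
    while bits > 2 and rng.random() < drop_prob:
        bits -= 1
    return bits

def quantize(w, bits):
    # Simple uniform re-quantization at the sampled bit-width.
    levels = 2 ** bits - 1
    scale = (w.max() - w.min()) / levels or 1.0
    return np.round((w - w.min()) / scale) * scale + w.min()

w = np.linspace(-1, 1, 8)
for layer in range(3):
    b = sample_bitwidth()
    print(f"layer {layer}: {b}-bit ->", quantize(w, b))
```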

Quantization

A General Family of Stochastic Proximal Gradient Methods for Deep Learning

no code implementations • 15 Jul 2020 • Jihun Yun, Aurelie C. Lozano, Eunho Yang

We propose a unified framework for stochastic proximal gradient descent, which we term ProxGen, that allows for arbitrary positive preconditioners and lower semi-continuous regularizers.
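
Below is a sketch of a preconditioned stochastic proximal step in the spirit of this framework, combining an Adam-like diagonal preconditioner with an elementwise L1 prox; the exact update rule and hyperparameters are illustrative assumptions, not the ProxGen algorithm as published.

```python
import numpy as np

def proxgen_like_step(w, g, state, lr=1e-2, beta2=0.999, lam=1e-3, eps=1e-8):
    # Adam-style second-moment estimate gives a positive diagonal preconditioner.
    state["v"] = beta2 * state["v"] + (1 - beta2) * g ** 2
    precond = np.sqrt(state["v"]) + eps
    z = w - lr * g / precond                    # preconditioned gradient step
    thresh = lr * lam / precond                 # prox threshold scaled per coordinate
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # L1 proximal map

# Toy usage on a 3-dimensional parameter vector.
w = np.array([0.5, -0.3, 0.05])
g = np.array([0.1, -0.2, 0.01])
state = {"v": np.zeros_like(w)}
print(proxgen_like_step(w, g, state))
```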

Quantization

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization

no code implementations • 29 Nov 2019 • Jung Hyun Lee, Jihun Yun, Sung Ju Hwang, Eunho Yang

As a natural extension of DropBits, we further introduce a way of learning heterogeneous quantization levels to find the proper bit-length for each layer using DropBits.

Quantization

Stochastic Gradient Methods with Block Diagonal Matrix Adaptation

no code implementations • 26 May 2019 • Jihun Yun, Aurelie C. Lozano, Eunho Yang

Extensive experiments reveal that block-diagonal approaches achieve state-of-the-art results on several deep learning tasks, and can outperform adaptive diagonal methods, vanilla SGD, as well as a recently proposed modified version of full-matrix adaptation.
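
A conceptual sketch of block-diagonal matrix adaptation follows: the parameter vector is split into blocks, each block accumulates a full second-moment matrix, and the gradient is preconditioned by that matrix's inverse square root. The block size, damping, and update rule are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def block_adaptive_step(w, g, G_blocks, block_size=2, lr=0.1, eps=1e-6):
    w_new = w.copy()
    for i, start in enumerate(range(0, len(w), block_size)):
        gb = g[start:start + block_size]
        G_blocks[i] += np.outer(gb, gb)          # accumulate per-block curvature
        # Inverse square root of the block matrix via its eigendecomposition.
        vals, vecs = np.linalg.eigh(G_blocks[i] + eps * np.eye(len(gb)))
        inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
        w_new[start:start + block_size] -= lr * inv_sqrt @ gb
    return w_new

# Toy usage: 4 parameters split into two 2x2 blocks.
w = np.array([0.5, -0.3, 0.2, 0.1])
g = np.array([0.1, -0.2, 0.05, 0.0])
G_blocks = [np.zeros((2, 2)) for _ in range(2)]
print(block_adaptive_step(w, g, G_blocks))
```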
