Search Results for author: U Kang

Found 28 papers, 10 papers with code

Cold-start Bundle Recommendation via Popularity-based Coalescence and Curriculum Heating

1 code implementation 5 Oct 2023 Hyunsik Jeon, Jong-eun Lee, Jeongin Yun, U Kang

To estimate the user-bundle relationship more accurately, CoHeat addresses the highly skewed distribution of bundle interactions through a popularity-based coalescence approach, which incorporates historical and affiliation information based on the bundle's popularity.

Contrastive Learning Marketing
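The coalescence idea above can be illustrated with a minimal sketch. The linear popularity weight below is a hypothetical choice for illustration, not the paper's exact formulation (CoHeat additionally uses curriculum heating and contrastive learning, which are not shown):

```python
def coalesce_score(hist_score, affil_score, popularity, max_popularity):
    """Blend a history-based score and an affiliation-based score
    according to bundle popularity.

    Popular bundles (many recorded interactions) lean on historical
    evidence; cold bundles fall back on affiliation (content)
    information. The linear weight is a stand-in for illustration.
    """
    w = popularity / max_popularity  # weight in [0, 1]
    return w * hist_score + (1.0 - w) * affil_score

# A popular bundle relies mostly on its interaction history,
# while a cold bundle relies mostly on affiliation information.
popular = coalesce_score(0.9, 0.4, popularity=80, max_popularity=100)
cold = coalesce_score(0.0, 0.6, popularity=5, max_popularity=100)
```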

Fast and Accurate Transferability Measurement by Evaluating Intra-class Feature Variance

no code implementations ICCV 2023 Huiwen Xu, U Kang

It is important to have a general method for measuring transferability that can be applied in a variety of situations, such as selecting the best self-supervised pre-trained models that do not have classifiers, and selecting the best transferring layer for a target task.

Transfer Learning

Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models

1 code implementation 7 Aug 2023 Seungcheol Park, Hojun Choi, U Kang

As a result, K-prune achieves significant accuracy improvements, with up to 58.02%p higher F1 score than existing retraining-free pruning algorithms under a high compression rate of 80% on the SQuAD benchmark, without any retraining process.

Language Modelling Model Compression

Fast and Accurate Dual-Way Streaming PARAFAC2 for Irregular Tensors -- Algorithm and Application

1 code implementation 28 May 2023 Jun-Gi Jang, Jeongyoung Lee, Yong-chan Park, U Kang

Although real-time analysis is necessary in the dual-way streaming setting, static PARAFAC2 decomposition methods fail to work efficiently there, since they re-run PARAFAC2 decomposition on the accumulated tensors whenever new data arrive.

Accurate Open-set Recognition for Memory Workload

1 code implementation 17 Dec 2022 Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park, Suhyun Chae, U Kang

How can we accurately identify new memory workloads while classifying known memory workloads?

Open Set Learning

Accurate Bundle Matching and Generation via Multitask Learning with Partially Shared Parameters

1 code implementation 19 Oct 2022 Hyunsik Jeon, Jun-Gi Jang, Taehun Kim, U Kang

BundleMage effectively mixes user preferences for items and bundles using an adaptive gate technique, achieving high accuracy in bundle matching.

Multi-Task Learning
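The adaptive gate is described only at a high level in the snippet above; a generic sigmoid gate that mixes two preference vectors might look like the sketch below. The gate parameters here are fixed constants standing in for learned weights, purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_mix(item_pref, bundle_pref, gate_weights, bias):
    """Mix item-level and bundle-level preference vectors with a gate.

    gate_weights and bias stand in for learned parameters; each
    dimension gets its own gate value in (0, 1) that decides how much
    of the item preference versus the bundle preference to keep.
    """
    mixed = []
    for i_p, b_p, w in zip(item_pref, bundle_pref, gate_weights):
        g = sigmoid(w * (i_p + b_p) + bias)  # per-dimension gate
        mixed.append(g * i_p + (1.0 - g) * b_p)
    return mixed

mixed = gated_mix([0.9, 0.1], [0.2, 0.8], gate_weights=[1.0, 1.0], bias=0.0)
```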

Diversely Regularized Matrix Factorization for Accurate and Aggregately Diversified Recommendation

1 code implementation 19 Oct 2022 Jongjin Kim, Hyunsik Jeon, Jaeri Lee, U Kang

However, it is challenging to tackle aggregate-level diversity with matrix factorization (MF), one of the most common recommendation models, since skewed real-world data lead to skewed recommendation results from MF.

Recommendation Systems
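For context, the MF baseline the paper builds on can be sketched in a few lines: factor a sparse rating matrix into user and item factors trained by SGD. This sketch includes no diversity regularizer (the paper's contribution); dimensions, learning rate, and data are illustrative:

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.01, epochs=1000):
    """Plain matrix factorization trained with SGD.

    ratings: list of (user, item, rating) triples.
    Returns user factors U and item factors V so that the dot product
    U[u] . V[i] approximates the observed rating r(u, i).
    """
    random.seed(0)
    U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                u_f = U[u][f]
                U[u][f] += lr * (err * V[i][f] - reg * U[u][f])
                V[i][f] += lr * (err * u_f - reg * V[i][f])
    return U, V

# Toy data: user 0 likes item 0 far more than item 1.
data = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]
U, V = factorize(data, n_users=2, n_items=2)
```

With skewed data, such a model concentrates its recommendations on popular items, which is exactly the aggregate-diversity problem the paper's regularizer targets.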

Accurate Action Recommendation for Smart Home via Two-Level Encoders and Commonsense Knowledge

1 code implementation 12 Aug 2022 Hyunsik Jeon, Jongjin Kim, Hoyoung Yoon, Jaeri Lee, U Kang

SmartSense then summarizes each user's action sequences with respect to the queried context in a query-attentive manner, extracting query-related patterns from the sequential actions.

Recommendation Systems
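The "query-attentive" summarization can be illustrated with generic dot-product attention: score each step of a sequence against a query, softmax the scores, and return the weighted sum. This is a textbook attention sketch, not SmartSense's exact two-level encoder:

```python
import math

def attend(query, keys, values):
    """Summarize a sequence by attention weights derived from a query.

    Scores each step by dot(query, key), normalizes with softmax, and
    returns the attention-weighted sum of the value vectors.
    """
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# Steps whose keys align with the query dominate the summary.
summary = attend(query=[1.0, 0.0],
                 keys=[[1.0, 0.0], [0.0, 1.0]],
                 values=[[10.0, 0.0], [0.0, 10.0]])
```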

Accurate Node Feature Estimation with Structured Variational Graph Autoencoder

1 code implementation 9 Jun 2022 Jaemin Yoo, Hyunsik Jeon, Jinhong Jung, U Kang

Given a graph with partial observations of node features, how can we estimate the missing features accurately?

Variational Inference

DPar2: Fast and Scalable PARAFAC2 Decomposition for Irregular Dense Tensors

no code implementations 24 Mar 2022 Jun-Gi Jang, U Kang

In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors.

Model-Agnostic Augmentation for Accurate Graph Classification

1 code implementation 21 Feb 2022 Jaemin Yoo, Sooyeon Shim, U Kang

Then, we propose NodeSam (Node Split and Merge) and SubMix (Subgraph Mix), two model-agnostic approaches for graph augmentation that satisfy all desired properties with different motivations.

Graph Classification

Multi-EPL: Accurate Multi-source Domain Adaptation

no code implementations 1 Jan 2021 Seongmin Lee, Hyunsik Jeon, U Kang

Given multiple source datasets with labels, how can we train a target model with no labeled data?

Domain Adaptation

Signed Graph Diffusion Network

no code implementations 28 Dec 2020 Jinhong Jung, Jaemin Yoo, U Kang

In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.

Link Sign Prediction Network Embedding

T-GAP: Learning to Walk across Time for Temporal Knowledge Graph Completion

no code implementations 19 Dec 2020 JaeHun Jung, Jinhong Jung, U Kang

However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability.

Relational Reasoning Temporal Knowledge Graph Completion +1

Time-Aware Tensor Decomposition for Missing Entry Prediction

no code implementations 16 Dec 2020 Dawon Ahn, Jun-Gi Jang, U Kang

The essential problems of how to exploit temporal properties in tensor decomposition and how to handle the sparsity of time slices remain unresolved.

Tensor Decomposition

Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT

no code implementations 30 Sep 2020 Ikhyun Cho, U Kang

PTP is a KD-specialized initialization method, which can act as a good initial guide for the student.

Knowledge Distillation Model Compression

Ensemble Multi-Source Domain Adaptation with Pseudolabels

no code implementations 29 Sep 2020 Seongmin Lee, Hyunsik Jeon, U Kang

Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels.

Domain Adaptation Ensemble Learning

Pea-KD: Parameter-efficient and accurate Knowledge Distillation

no code implementations 28 Sep 2020 Ikhyun Cho, U Kang

SPS is a new parameter sharing method that allows greater model complexity for the student model.

Knowledge Distillation Model Compression

Fast Partial Fourier Transform

no code implementations 28 Aug 2020 Yong-chan Park, Jun-Gi Jang, U Kang

In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.

Time Series Time Series Analysis
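To make the problem concrete, the sketch below directly evaluates only a consecutive range of DFT coefficients. This naive O(N × count) evaluation is the *task* PFT addresses; the paper's algorithm modifies Cooley-Tukey to compute the same range asymptotically faster, which is not reproduced here:

```python
import cmath

def partial_dft(x, start, count):
    """Directly evaluate DFT coefficients X[start .. start+count-1].

    Naive O(N * count) computation of a consecutive coefficient range;
    PFT produces the same range more efficiently.
    """
    n = len(x)
    out = []
    for k in range(start, start + count):
        out.append(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n)))
    return out

# A pure tone at frequency 3 concentrates its energy in X[3].
n = 16
signal = [cmath.exp(2j * cmath.pi * 3 * t / n) for t in range(n)]
coeffs = partial_dft(signal, start=2, count=3)  # X[2], X[3], X[4]
```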

Fast and Accurate Transferability Measurement for Heterogeneous Multivariate Data

no code implementations 23 Dec 2019 Seungcheol Park, Huiwen Xu, Taehun Kim, Inhwan Hwang, Kyung-Jun Kim, U Kang

We address the problem of measuring transferability between source and target datasets, where the source and the target have different feature spaces and distributions.

Knowledge Extraction with No Observable Data

1 code implementation NeurIPS 2019 Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang

Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.

Data-free Knowledge Distillation
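The distillation objective referred to above can be sketched as the standard Hinton-style loss: cross-entropy of the student's temperature-softened predictions against the teacher's softened distribution. The paper's contribution is producing such targets *without* observable data, which this generic sketch does not cover:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher temperature flattens it."""
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against
    the teacher's softened distribution, minimized when they match.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.0, -2.0]
matched = distillation_loss(teacher, [2.0, 0.0, -2.0])
mismatched = distillation_loss(teacher, [-2.0, 0.0, 2.0])
```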

FALCON: Lightweight and Accurate Convolution

no code implementations 25 Sep 2019 Jun-Gi Jang, Chun Quan, Hyun Dong Lee, U Kang

By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolution via GEP, FALCON achieves accuracy close to that of the trained standard model.

Tensor Decomposition

FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN

no code implementations 25 Sep 2019 Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang

A promising direction is depthwise separable convolution, which replaces a standard convolution with a depthwise convolution followed by a pointwise convolution.
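The compression benefit of that replacement follows directly from parameter counting: a standard k×k layer needs k·k·C_in·C_out weights, while a depthwise k×k conv (one filter per input channel) plus a 1×1 pointwise conv needs only k·k·C_in + C_in·C_out. A quick arithmetic sketch (the layer sizes are illustrative, not from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k convolution (one k x k filter per
    input channel) followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 layer mapping 64 -> 128 channels:
standard = conv_params(3, 64, 128)                   # 73728 weights
separable = depthwise_separable_params(3, 64, 128)   # 8768 weights
```

For this example the separable form uses roughly 8.4x fewer weights, which is where the "compressing and accelerating" in the title comes from.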

Transfer Alignment Network for Double Blind Unsupervised Domain Adaptation

no code implementations 25 Sep 2019 Huiwen Xu, U Kang

In this paper, we define the problem of unsupervised domain adaptation under double blind constraint, where either the source or the target domain cannot observe the data in the other domain, but data from both domains are used for training.

Transfer Learning Unsupervised Domain Adaptation

Data Context Adaptation for Accurate Recommendation with Additional Information

no code implementations 22 Aug 2019 Hyunsik Jeon, Bonhun Koo, U Kang

Given a sparse rating matrix and an auxiliary matrix of users or items, how can we accurately predict missing ratings considering different data contexts of entities?

Efficient Learning of Bounded-Treewidth Bayesian Networks from Complete and Incomplete Data Sets

no code implementations 7 Feb 2018 Mauro Scanagatta, Giorgio Corani, Marco Zaffalon, Jaemin Yoo, U Kang

We present k-MAX, a novel anytime algorithm for this task that scales up to thousands of variables.

Imputation
