Search Results for author: Jun-Gi Jang

Found 9 papers, 4 papers with code

Fast and Accurate Dual-Way Streaming PARAFAC2 for Irregular Tensors -- Algorithm and Application

1 code implementation • 28 May 2023 • Jun-Gi Jang, Jeongyoung Lee, Yong-chan Park, U Kang

Although real-time analysis is necessary in the dual-way streaming setting, static PARAFAC2 decomposition methods fail to work efficiently here, since they re-run PARAFAC2 decomposition on the accumulated tensors whenever new data arrive.

Accurate Open-set Recognition for Memory Workload

1 code implementation • 17 Dec 2022 • Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park, Suhyun Chae, U Kang

How can we accurately identify new memory workloads while classifying known memory workloads?

Open Set Learning

Accurate Bundle Matching and Generation via Multitask Learning with Partially Shared Parameters

1 code implementation • 19 Oct 2022 • Hyunsik Jeon, Jun-Gi Jang, Taehun Kim, U Kang

BundleMage effectively mixes user preferences for items and bundles using an adaptive gate technique to achieve high accuracy in bundle matching.

Multi-Task Learning
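The snippet above only names the adaptive gate; the paper's exact formulation is not shown here. A minimal sketch of one common gating scheme for mixing two preference vectors, with purely illustrative weights `W` and `b`, might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_mix(u_item, u_bundle, W, b):
    """Mix item-level and bundle-level preference vectors with a learned gate.

    g is an element-wise gate in (0, 1) computed from both vectors, so each
    output coordinate interpolates between the two preference signals.
    """
    z = np.concatenate([u_item, u_bundle])
    g = sigmoid(W @ z + b)                  # gate, same dimension as u_item
    return g * u_item + (1.0 - g) * u_bundle

rng = np.random.default_rng(0)
d = 8
u_item = rng.standard_normal(d)             # illustrative user-item preference
u_bundle = rng.standard_normal(d)           # illustrative user-bundle preference
W = rng.standard_normal((d, 2 * d)) * 0.1   # illustrative gate weights
b = np.zeros(d)

mixed = gated_mix(u_item, u_bundle, W, b)
```

Because the gate lies in (0, 1), each coordinate of `mixed` stays between the corresponding item and bundle preference values.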

DPar2: Fast and Scalable PARAFAC2 Decomposition for Irregular Dense Tensors

no code implementations • 24 Mar 2022 • Jun-Gi Jang, U Kang

In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors.

Time-Aware Tensor Decomposition for Missing Entry Prediction

no code implementations • 16 Dec 2020 • Dawon Ahn, Jun-Gi Jang, U Kang

The essential problems of how to exploit the temporal property in tensor decomposition and how to handle the sparsity of time slices remain unresolved.

Tensor Decomposition

Fast Partial Fourier Transform

no code implementations • 28 Aug 2020 • Yong-chan Park, Jun-Gi Jang, U Kang

In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.

Time Series • Time Series Analysis
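For context, the coefficients PFT targets can be computed naively in O(nm) time for m requested coefficients. The sketch below shows that naive baseline (not the paper's Cooley-Tukey modification) and checks it against a slice of NumPy's full FFT:

```python
import numpy as np

def partial_dft(x, k_start, k_end):
    """Naively compute DFT coefficients X[k] for k in [k_start, k_end).

    Costs O(n * m) for m requested coefficients, versus O(n log n) for a
    full FFT; PFT improves on both regimes when m << n.
    """
    n = len(x)
    ks = np.arange(k_start, k_end)
    # X[k] = sum_t x[t] * exp(-2*pi*i*k*t / n)
    twiddle = np.exp(-2j * np.pi * np.outer(ks, np.arange(n)) / n)
    return twiddle @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
X_part = partial_dft(x, 10, 20)       # only coefficients 10..19
X_full = np.fft.fft(x)                # all 1024 coefficients
assert np.allclose(X_part, X_full[10:20])
```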

FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN

no code implementations • 25 Sep 2019 • Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang

A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution.
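The parameter savings behind depthwise separable convolution can be sketched with simple weight-count arithmetic; the channel and kernel sizes below are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Weight count of a depthwise k x k convolution (one filter per input
    channel) followed by a pointwise 1 x 1 convolution."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# e.g. a typical 3x3 layer mapping 128 -> 256 channels
std = conv_params(128, 256, 3)        # 294,912 weights
sep = separable_params(128, 256, 3)   # 33,920 weights, roughly 8.7x fewer
```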

FALCON: Lightweight and Accurate Convolution

no code implementations • 25 Sep 2019 • Jun-Gi Jang, Chun Quan, Hyun Dong Lee, U Kang

By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolution via GEP, FALCON achieves accuracy close to that of the trained standard model.

Tensor Decomposition

VeST: Very Sparse Tucker Factorization of Large-Scale Tensors

1 code implementation • 4 Apr 2019 • Moonjeong Park, Jun-Gi Jang, Sael Lee

Given a large tensor, how can we decompose it into a sparse core tensor and factor matrices so that the results are easier to interpret?

Numerical Analysis
