1 code implementation • 28 May 2023 • Jun-Gi Jang, Jeongyoung Lee, Yong-chan Park, U Kang
Although real-time analysis is necessary in the dual-way streaming setting, static PARAFAC2 decomposition methods fail to work efficiently here, since they re-run PARAFAC2 decomposition on the accumulated tensor whenever new data arrive.
1 code implementation • 17 Dec 2022 • Jun-Gi Jang, Sooyeon Shim, Vladimir Egay, Jeeyong Lee, Jongmin Park, Suhyun Chae, U Kang
How can we accurately identify new memory workloads while classifying known memory workloads?
1 code implementation • 19 Oct 2022 • Hyunsik Jeon, Jun-Gi Jang, Taehun Kim, U Kang
BundleMage effectively mixes a user's preferences for items and bundles using an adaptive gate technique, achieving high accuracy in bundle matching.
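The gating idea can be sketched as a learned, per-dimension interpolation between the two preference vectors. This is a minimal illustration of an adaptive gate, not BundleMage's actual architecture; the weight matrix `W` and the sigmoid gating form are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_mix(item_pref, bundle_pref, W):
    """Adaptive gate sketch: a learned projection W decides, per dimension,
    how much of the item-level vs. bundle-level preference to keep.
    (Hypothetical form -- BundleMage's actual gate may differ.)"""
    g = sigmoid(np.concatenate([item_pref, bundle_pref]) @ W)  # g in (0, 1)
    return g * item_pref + (1.0 - g) * bundle_pref             # convex mix
```

Because the gate outputs values in (0, 1), each output dimension is a convex combination of the corresponding item and bundle preference entries.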
no code implementations • 24 Mar 2022 • Jun-Gi Jang, U Kang
In this paper, we propose DPar2, a fast and scalable PARAFAC2 decomposition method for irregular dense tensors.
no code implementations • 16 Dec 2020 • Dawon Ahn, Jun-Gi Jang, U Kang
The essential problems of how to exploit the temporal property in tensor decomposition and how to handle the sparsity of time slices remain unresolved.
no code implementations • 28 Aug 2020 • Yong-chan Park, Jun-Gi Jang, U Kang
In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.
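For context, the goal of computing only a consecutive range of Fourier coefficients can be shown with a direct (naive) partial DFT, checked against the full FFT. This sketch is only an illustration of the problem setting; PFT itself reorganizes the Cooley-Tukey recursion to compute such a range faster than the O(N·count) loop below.

```python
import numpy as np

def partial_dft(x, start, count):
    """Naively compute the DFT coefficients X[start : start + count]
    of a length-N signal in O(N * count) time. Illustrative only --
    not the PFT algorithm from the paper."""
    N = len(x)
    n = np.arange(N)
    ks = np.arange(start, start + count)
    # Each requested coefficient is a dot product with one complex exponential.
    return np.exp(-2j * np.pi * np.outer(ks, n) / N) @ x

x = np.random.rand(64)
# The partial result matches the corresponding slice of the full FFT.
assert np.allclose(partial_dft(x, 10, 5), np.fft.fft(x)[10:15])
```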
no code implementations • 25 Sep 2019 • Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang
A promising direction is based on depthwise separable convolution which replaces a standard convolution with a depthwise convolution and a pointwise convolution.
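The replacement described above can be sketched directly: a depthwise convolution filters each input channel independently, and a 1x1 pointwise convolution then mixes channels. The loop-based implementation below is a minimal NumPy sketch (stride 1, no padding), included to show the factorization and its parameter savings, not any particular paper's code.

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise conv (one k x k kernel per input channel) followed by a
    1x1 pointwise conv that mixes channels. x: (H, W, C_in),
    dw_kernels: (k, k, C_in), pw_weights: (C_in, C_out)."""
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1      # 'valid' output size, stride 1
    dw_out = np.zeros((Ho, Wo, C_in))
    for c in range(C_in):              # each channel is filtered independently
        for i in range(Ho):
            for j in range(Wo):
                dw_out[i, j, c] = np.sum(x[i:i+k, j:j+k, c] * dw_kernels[:, :, c])
    # Pointwise step: a (C_in, C_out) linear mix at every spatial position.
    return dw_out @ pw_weights

# Parameter count vs. a standard k x k convolution:
k, C_in, C_out = 3, 16, 32
standard = k * k * C_in * C_out            # 4608 weights
separable = k * k * C_in + C_in * C_out    # 144 + 512 = 656 weights
```

The factorization trades the k·k·C_in·C_out weights of a standard convolution for k·k·C_in + C_in·C_out, roughly a 7x reduction in this example.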
no code implementations • 25 Sep 2019 • Jun-Gi Jang, Chun Quan, Hyun Dong Lee, U Kang
By exploiting the knowledge of a trained standard model and carefully determining the order of depthwise separable convolutions via GEP, FALCON achieves accuracy close to that of the trained standard model.
1 code implementation • 4 Apr 2019 • Moonjeong Park, Jun-Gi Jang, Sael Lee
Given a large tensor, how can we decompose it into a sparse core tensor and factor matrices so that the results are easier to interpret?
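The target structure is a Tucker-style factorization, where the tensor is rebuilt from a small core and one factor matrix per mode; a sparse core means only a few (p, q, r) interaction terms are active, which aids interpretation. The reconstruction below is the standard Tucker identity, shown as a sketch of the problem setting rather than the paper's decomposition algorithm.

```python
import numpy as np

def tucker_reconstruct(core, factors):
    """Rebuild a 3-way tensor from a core G and factor matrices A, B, C:
    X[i, j, k] = sum_{p,q,r} G[p, q, r] * A[i, p] * B[j, q] * C[k, r].
    A sparse G keeps only a few active interaction terms."""
    A, B, C = factors
    return np.einsum('pqr,ip,jq,kr->ijk', core, A, B, C)
```

With a rank-(2, 3, 2) core and factor matrices of shapes (4, 2), (5, 3), (6, 2), the reconstruction is a (4, 5, 6) tensor.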