Search Results for author: Jin-Duk Park

Found 7 papers, 4 papers with code

Turbo-CF: Matrix Decomposition-Free Graph Filtering for Fast Recommendation

1 code implementation • 22 Apr 2024 • Jin-Duk Park, Yong-Min Shin, Won-Yong Shin

In this paper, we propose Turbo-CF, a graph filtering (GF)-based collaborative filtering (CF) method that is both training-free and matrix decomposition-free.

Collaborative Filtering

Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity

1 code implementation • 22 Apr 2024 • Yu Hou, Jin-Duk Park, Won-Yong Shin

A recent study has shown that diffusion models are well-suited for modeling the generative process of user-item interactions in recommender systems due to their denoising nature.

Collaborative Filtering · Computational Efficiency +2

Node Feature Augmentation Vitaminizes Network Alignment

no code implementations • 25 Apr 2023 • Jin-Duk Park, Cong Tran, Won-Yong Shin, Xin Cao

Network alignment (NA) is the task of discovering node correspondences across multiple networks.

Computational Efficiency

Grad-Align+: Empowering Gradual Network Alignment Using Attribute Augmentation

no code implementations • 23 Aug 2022 • Jin-Duk Park, Cong Tran, Won-Yong Shin, Xin Cao

Network alignment (NA) is the task of discovering node correspondences across different networks.

Attribute

Two-Stage Deep Anomaly Detection with Heterogeneous Time Series Data

no code implementations • 10 Feb 2022 • Kyeong-Joong Jeong, Jin-Duk Park, Kyusoon Hwang, Seong-Lyun Kim, Won-Yong Shin

We introduce a data-driven anomaly detection framework using a manufacturing dataset collected from a factory assembly line.

Anomaly Detection · Time Series +2

On the Power of Gradual Network Alignment Using Dual-Perception Similarities

1 code implementation • 26 Jan 2022 • Jin-Duk Park, Cong Tran, Won-Yong Shin, Xin Cao

Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
