Search Results for author: Junyoung Hwang

Found 5 papers, 3 papers with code

Multi-Domain Recommendation to Attract Users via Domain Preference Modeling

no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu

Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.
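As a toy illustration of the MDRAU setup (not the paper's method; the mean-pooled user vector and the cross_domain_map matrix are hypothetical placeholders), a user representation built from seen-domain interactions can be mapped into an unseen domain's item space and used to rank that domain's items:

```python
# Toy MDRAU-style setup (illustrative only, not the paper's method):
# rank items from an "unseen" domain using only "seen"-domain history.
import numpy as np

rng = np.random.default_rng(1)
dim = 16

seen_item_emb = rng.normal(size=(200, dim))     # item embeddings of the user's seen domains
unseen_item_emb = rng.normal(size=(80, dim))    # item embeddings of an unseen domain
cross_domain_map = rng.normal(size=(dim, dim))  # hypothetical learned transfer matrix

user_history = [5, 12, 77, 150]                 # seen-domain interactions
user_vec = seen_item_emb[user_history].mean(axis=0)

# Map the seen-domain preference vector into the unseen domain and rank its items.
scores = unseen_item_emb @ (cross_domain_map @ user_vec)
print("top unseen-domain items:", np.argsort(-scores)[:5])
```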

Deep Rating Elicitation for New Users in Collaborative Filtering

1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu

Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset to infer their preferences, in order to improve the quality of initial recommendations.

Collaborative Filtering · Recommendation Systems
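For context, here is a minimal rating-elicitation sketch (not the paper's deep elicitation model; the seed itemset, ridge penalty, and pretrained item factors are assumed placeholders): a new user's ratings on a small seed itemset are fit against pretrained item factors to produce initial recommendations.

```python
# Minimal rating-elicitation sketch: a new user rates a small seed itemset,
# and we infer their preferences by fitting a user vector against pretrained
# item factors with ridge regression. Illustrative, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 100, 8
item_factors = rng.normal(size=(n_items, dim))        # assumed pretrained (e.g., from MF)

seed_items = np.array([3, 17, 42, 58, 90])            # hypothetical seed itemset
seed_ratings = np.array([5.0, 1.0, 4.0, 2.0, 5.0])    # ratings elicited from the new user

# Ridge regression: user_vec = argmin ||r_seed - V_seed u||^2 + reg ||u||^2
V = item_factors[seed_items]
reg = 0.1
user_vec = np.linalg.solve(V.T @ V + reg * np.eye(dim), V.T @ seed_ratings)

# Initial recommendations: score every item and keep the top unseen ones.
scores = item_factors @ user_vec
top_k = [i for i in np.argsort(-scores) if i not in set(seed_items)][:10]
print("initial recommendations:", top_k)
```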

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering

1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu

ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.

Collaborative Filtering
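A minimal sketch of the multi-branch structure described in the ConCF entry above, assuming a PyTorch backbone with embedding tables; the two heads and their losses (pointwise BCE and pairwise BPR) are illustrative choices rather than the paper's exact objectives, and the consensus-learning step is omitted.

```python
# Multi-branch one-class CF model: a shared backbone with auxiliary heads,
# each head trained with a different (heterogeneous) objective. Sketch only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiBranchCF(nn.Module):
    def __init__(self, n_users, n_items, dim=64, n_heads=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # shared backbone
        self.item_emb = nn.Embedding(n_items, dim)
        self.heads = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_heads)])

    def score(self, head_idx, users, items):
        u = self.heads[head_idx](self.user_emb(users))
        return (u * self.item_emb(items)).sum(-1)

def heterogeneous_loss(model, users, pos_items, neg_items):
    # Head 0: pointwise binary cross-entropy on positives vs. sampled negatives.
    pos0, neg0 = model.score(0, users, pos_items), model.score(0, users, neg_items)
    bce = F.binary_cross_entropy_with_logits(pos0, torch.ones_like(pos0)) + \
          F.binary_cross_entropy_with_logits(neg0, torch.zeros_like(neg0))
    # Head 1: pairwise BPR loss preferring the positive item over the negative.
    bpr = -F.logsigmoid(model.score(1, users, pos_items) -
                        model.score(1, users, neg_items)).mean()
    return bce + bpr

# Usage with random placeholder data.
model = MultiBranchCF(n_users=1000, n_items=5000)
users = torch.randint(0, 1000, (64,))
pos, neg = torch.randint(0, 5000, (64,)), torch.randint(0, 5000, (64,))
heterogeneous_loss(model, users, pos, neg).backward()
```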

Topology Distillation for Recommender System

no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.

Knowledge Distillation · Model Compression · +1
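For context, a generic topology-matching distillation loss can be sketched as below; this is not HTD itself (the hierarchical grouping is omitted) and the tensors are random placeholders, but it shows the idea of transferring the teacher's relational structure to a smaller student despite the capacity gap.

```python
# Generic topology-distillation sketch (not HTD): make the pairwise similarity
# structure of the student's embeddings match that of the teacher's.
import torch
import torch.nn.functional as F

def topology_distill_loss(teacher_emb, student_emb):
    # teacher_emb: [batch, d_t], student_emb: [batch, d_s]; the dimensions may
    # differ, but the batch-by-batch similarity matrices are directly comparable.
    t_sim = F.normalize(teacher_emb, dim=-1) @ F.normalize(teacher_emb, dim=-1).T
    s_sim = F.normalize(student_emb, dim=-1) @ F.normalize(student_emb, dim=-1).T
    return F.mse_loss(s_sim, t_sim)

# Usage: add this term to the student's base recommendation loss.
teacher_emb = torch.randn(256, 128)                       # frozen teacher (placeholder)
student_emb = torch.randn(256, 16, requires_grad=True)    # compact student (placeholder)
topology_distill_loss(teacher_emb, student_emb).backward()
```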

DE-RRD: A Knowledge Distillation Framework for Recommender System

2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu

Recent recommender systems have started to employ knowledge distillation, which is a model compression technique distilling knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.

Knowledge Distillation · Model Compression · +1
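Below is a minimal sketch of the general teacher-to-student distillation setup the abstract refers to, assuming both models output scores over the same candidate items; the temperature-scaled KL objective is the standard soft-target formulation, not DE-RRD's specific framework.

```python
# Generic knowledge-distillation loss for a recommender: the compact student is
# trained to match the cumbersome teacher's softened item-score distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_scores, teacher_scores, temperature=2.0):
    # Both tensors: [batch_users, n_candidate_items] of raw recommendation scores.
    soft_targets = F.softmax(teacher_scores / temperature, dim=-1)
    log_student = F.log_softmax(student_scores / temperature, dim=-1)
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Usage with random placeholder scores; in practice this is added to the
# student's own ranking loss.
teacher_scores = torch.randn(32, 500)                       # frozen teacher (placeholder)
student_scores = torch.randn(32, 500, requires_grad=True)   # compact student (placeholder)
distillation_loss(student_scores, teacher_scores).backward()
```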
