no code implementations • 30 May 2024 • Suyeon Kim, Dongha Lee, SeongKu Kang, Sukang Chae, Sanghwan Jang, Hwanjo Yu
In this paper, we propose the DynaCor framework, which distinguishes incorrectly labeled instances from correctly labeled ones based on the dynamics of the training signals.
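A common way to operationalize "training dynamics" for noisy-label detection is the small-loss heuristic: clean instances tend to see their loss decay, while mislabeled ones keep a high loss late in training. The sketch below illustrates that general idea only; it is not DynaCor's actual method, and the threshold and toy loss values are assumptions for illustration.

```python
# Illustrative sketch (not DynaCor itself): flag likely mislabeled
# instances by their per-epoch loss dynamics. Instances whose loss
# stays high in the later epochs are treated as suspect.

def flag_suspect_instances(loss_history, threshold=1.0):
    """loss_history: dict mapping instance id -> list of per-epoch losses.
    Returns ids whose mean loss over the last half of training exceeds
    the threshold."""
    suspects = []
    for idx, losses in loss_history.items():
        tail = losses[len(losses) // 2:]          # later epochs only
        if sum(tail) / len(tail) > threshold:
            suspects.append(idx)
    return suspects

history = {
    0: [2.3, 1.1, 0.4, 0.1],   # clean: loss decays over training
    1: [2.4, 2.2, 2.1, 2.0],   # suspect: loss stays high
}
print(flag_suspect_instances(history))  # -> [1]
```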
no code implementations • 29 May 2024 • Gyuseok Lee, SeongKu Kang, Wonbin Kweon, Hwanjo Yu
We expect this research direction to contribute to narrowing the gap between existing KD studies and practical applications, thereby enhancing the applicability of KD in real-world systems.
no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu
Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.
1 code implementation • 15 Mar 2024 • Seonghyeon Lee, Sanghwan Jang, Seongbo Jang, Dongha Lee, Hwanjo Yu
However, our analysis also reveals that the models underutilize auxiliary function calls, suggesting a future direction of enhancing their implementations by eliciting the auxiliary function call ability encoded in the models.
1 code implementation • 14 Mar 2024 • Joonwon Jang, Sanghwan Jang, Wonbin Kweon, Minjin Jeon, Hwanjo Yu
However, LLMs often rely on their pre-trained semantic priors of demonstrations rather than on the input-label relationships to proceed with ICL prediction.
no code implementations • 7 Mar 2024 • Jaehyun Lee, SeongKu Kang, Hwanjo Yu
We introduce a new method, called ROGMC, to leverage Rating Ordinality in GNN-based Matrix Completion.
1 code implementation • 7 Mar 2024 • SeongKu Kang, Shivam Agarwal, Bowen Jin, Dongha Lee, Hwanjo Yu, Jiawei Han
Document retrieval has greatly benefited from the advancements of large-scale pre-trained language models (PLMs).
1 code implementation • 27 Feb 2024 • Seongbo Jang, Seonghyeon Lee, Hwanjo Yu
As language models are often deployed as chatbot assistants, it becomes important for models to engage in conversations in a user's first language.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, Hwanjo Yu
On this basis, we propose a Doubly Calibrated Estimator that involves the calibration of both the imputation and propensity models.
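The Doubly Calibrated Estimator builds on the doubly robust family of estimators, which combine an imputation model (filling in errors for unobserved user-item pairs) with a propensity model (correcting observed errors by inverse propensity). The sketch below shows the generic doubly robust form only, with toy numbers; it is not the paper's calibrated variant.

```python
# Minimal sketch of a doubly robust (DR) error estimator: imputed errors
# e_hat cover all pairs, and observed pairs get a propensity-weighted
# correction using the true error e. Toy values, not from the paper.

def dr_estimate(observed, e_true, e_hat, p_hat):
    """observed, e_true, e_hat, p_hat: equal-length lists over all
    user-item pairs. e_true is only trusted where observed[i] == 1."""
    n = len(observed)
    total = 0.0
    for o, e, eh, p in zip(observed, e_true, e_hat, p_hat):
        total += eh + o * (e - eh) / p   # imputation + propensity correction
    return total / n

obs   = [1, 0, 1, 0]
e     = [0.2, 0.0, 0.6, 0.0]   # true errors (known only when observed)
e_hat = [0.3, 0.4, 0.5, 0.2]   # imputation model outputs
p_hat = [0.5, 0.5, 0.5, 0.5]   # propensity model outputs
print(dr_estimate(obs, e, e_hat, p_hat))
```

The calibration step proposed in the paper would adjust `e_hat` and `p_hat` so their magnitudes are trustworthy before they enter this formula.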
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Sanghwan Jang, Hwanjo Yu
To address this issue, we introduce Top-Personalized-K Recommendation, a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction.
1 code implementation • 26 Feb 2024 • Wonbin Kweon, SeongKu Kang, Junyoung Hwang, Hwanjo Yu
Recent recommender systems have started to use rating elicitation, which asks new users to rate a small seed itemset for inferring their preferences, to improve the quality of initial recommendations.
1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
1 code implementation • 27 Feb 2023 • Su Kim, Dongha Lee, SeongKu Kang, Seonghyeon Lee, Hwanjo Yu
In this paper, motivated by this observation, we propose TopExpert to leverage topology-specific prediction models (referred to as experts), each of which is responsible for each molecular group sharing similar topological semantics.
no code implementations • 28 Jan 2023 • Junsu Cho, Dongmin Hyun, Dong won Lim, Hyeon jae Cheon, Hyoung-iel Park, Hwanjo Yu
To this end, we first identify the characteristics of multi-behavior sequences that SRSs should consider, and then propose two methods for Dynamic Multi-behavior Sequence modeling that reflect them: DyMuS, a light version, and DyMuS+, an improved version.
1 code implementation • 21 Dec 2022 • Dongmin Hyun, Xiting Wang, Chanyoung Park, Xing Xie, Hwanjo Yu
We formulate the unsupervised summarization based on the Markov decision process with rewards representing the summary quality.
no code implementations • 18 Oct 2022 • Dongha Lee, Jiaming Shen, Seonghyeon Lee, Susik Yoon, Hwanjo Yu, Jiawei Han
Topic taxonomies display hierarchical topic structures of a text corpus and provide topical knowledge to enhance various NLP applications.
1 code implementation • 14 Sep 2022 • Dongmin Hyun, Chanyoung Park, Junsu Cho, Hwanjo Yu
We first formulate a task that requires predicting which items each user will consume in the recent period of the training time based on users' consumption history.
Ranked #1 on Sequential Recommendation on Amazon Cell Phones
1 code implementation • ACL 2022 • Seonghyeon Lee, Dongha Lee, Seongbo Jang, Hwanjo Yu
In the end, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs, which enhances the quality of sentence similarity and their interpretation.
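Contrastive frameworks of this kind train an anchor sentence to score higher with its positive pair than with negatives under an InfoNCE-style objective. The sketch below uses plain cosine similarity as the scoring function for illustration; CLRCMD instead plugs in its distance-based similarity (RCMD), and the vectors here are toy values, not real sentence embeddings.

```python
import math

# Generic contrastive (InfoNCE-style) objective over sentence pairs,
# sketched with cosine similarity as the scoring function.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """-log softmax score of the positive against all candidates."""
    scores = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in scores]
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

anchor = [1.0, 0.0]
# Loss is small when the positive is truly close to the anchor...
loss_good = contrastive_loss(anchor, [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
# ...and large when a distant vector is treated as the positive.
loss_bad = contrastive_loss(anchor, [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
print(loss_good < loss_bad)  # -> True
```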
1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu
ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
no code implementations • 18 Jan 2022 • Dongha Lee, Jiaming Shen, SeongKu Kang, Susik Yoon, Jiawei Han, Hwanjo Yu
Topic taxonomies, which represent the latent topic (or category) structure of document collections, provide valuable knowledge of contents in many applications such as web search and information filtering.
1 code implementation • 9 Dec 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Extensive evaluations with various personalized ranking models on real-world datasets show that both the proposed calibration methods and the unbiased empirical risk minimization significantly improve the calibration performance.
no code implementations • 24 Nov 2021 • Dongha Lee, Dongmin Hyun, Jiawei Han, Hwanjo Yu
To address this challenge, we introduce a new task referred to as out-of-category detection, which aims to distinguish the documents according to their semantic relevance to the inlier (or target) categories by using the category names as weak supervision.
no code implementations • 22 Nov 2021 • Dongha Lee, Su Kim, Seonghyeon Lee, Chanyoung Park, Hwanjo Yu
With the help of a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and predict its class label using that representation.
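The global readout mentioned above reduces to a simple aggregation over node embeddings, for example mean pooling, whose output the classifier consumes as the graph-level representation. A minimal sketch with placeholder node vectors:

```python
# Graph-level embedding via mean readout: average the node
# representations dimension-wise to get one vector per graph.

def mean_readout(node_embeddings):
    """node_embeddings: list of equal-length vectors, one per node."""
    n = len(node_embeddings)
    dim = len(node_embeddings[0])
    return [sum(v[d] for v in node_embeddings) / n for d in range(dim)]

nodes = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(mean_readout(nodes))  # -> [3.0, 4.0]
```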
1 code implementation • ICCV 2021 • Dongha Lee, Sehun Yu, Hyunjun Ju, Hwanjo Yu
Most recent studies on detecting and localizing temporal anomalies have mainly employed deep neural networks to learn the normal patterns of temporal data in an unsupervised manner.
no code implementations • ACL 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu
Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.
1 code implementation • 8 Jul 2021 • Junsu Cho, SeongKu Kang, Dongmin Hyun, Hwanjo Yu
Session-based Recommender Systems (SRSs) have been actively developed to recommend the next item of an anonymous short item sequence (i.e., session).
no code implementations • 16 Jun 2021 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
To address this issue, we propose a novel method named Hierarchical Topology Distillation (HTD) which distills the topology hierarchically to cope with the large capacity gap.
1 code implementation • 5 Jun 2021 • Wonbin Kweon, SeongKu Kang, Hwanjo Yu
Recommender systems (RS) have started to employ knowledge distillation, a model compression technique that trains a compact model (student) with knowledge transferred from a cumbersome model (teacher).
1 code implementation • 14 May 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu
Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.
no code implementations • 13 May 2021 • Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, Hwanjo Yu
To make the representations of positively-related users and items similar to each other while avoiding a collapsed solution, BUIR adopts two distinct encoder networks that learn from each other; the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides the consistent targets by slowly approximating the first encoder.
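The second (target) encoder's "slow approximation" of the first is typically implemented as an exponential moving average of parameters, as in momentum-based self-supervised methods. The sketch below shows that update rule alone, with plain floats standing in for encoder parameters; it is an illustration of the mechanism, not BUIR's full training loop.

```python
# EMA (momentum) update: the target encoder's parameters slowly track
# the online encoder's, providing stable prediction targets.

def ema_update(target_params, online_params, momentum=0.99):
    """Move each target parameter a small step toward the online one."""
    return [momentum * t + (1.0 - momentum) * o
            for t, o in zip(target_params, online_params)]

target = [0.0, 0.0]
online = [1.0, 1.0]
for _ in range(3):                     # three training steps
    target = ema_update(target, online)
print([round(t, 6) for t in target])   # -> [0.029701, 0.029701]
```

With a momentum close to 1, the target drifts only slightly per step, which is what prevents the two encoders from collapsing to a trivial solution.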
1 code implementation • 29 Apr 2021 • Junsu Cho, Dongmin Hyun, SeongKu Kang, Hwanjo Yu
Existing studies regard the time information as a single type of feature and focus on how to associate it with user preferences on items.
1 code implementation • 2 Apr 2021 • Dongha Lee, Seonghyeon Lee, Hwanjo Yu
With the increase of available time series data, predicting their class labels has been one of the most important challenges in a wide range of disciplines.
1 code implementation • 2 Apr 2021 • Dongha Lee, Sehun Yu, Hwanjo Yu
The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not always match the training distribution in real-world applications.
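To make the detection task concrete, a widely used baseline scores each sample by its maximum softmax probability (MSP): a flat predictive distribution suggests the sample is out-of-distribution. This sketch shows only that baseline, not the paper's detector, and the threshold and logits are toy assumptions.

```python
import math

# MSP baseline for OOD detection: a sample whose top class probability
# falls below a threshold is flagged as likely out-of-distribution.

def max_softmax_prob(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return max(exps) / sum(exps)

def is_ood(logits, threshold=0.7):
    return max_softmax_prob(logits) < threshold

print(is_ood([5.0, 0.1, 0.2]))   # confident in-distribution -> False
print(is_ood([1.0, 0.9, 1.1]))   # flat distribution -> True
```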
no code implementations • 1 Jan 2021 • Hyunjun Ju, Dongha Lee, SeongKu Kang, Hwanjo Yu
Recent studies on one-class classification have achieved a remarkable performance, by employing the self-supervised classifier that predicts the geometric transformation applied to in-class images.
2 code implementations • 8 Dec 2020 • SeongKu Kang, Junyoung Hwang, Wonbin Kweon, Hwanjo Yu
Recent recommender systems have started to employ knowledge distillation, which is a model compression technique distilling knowledge from a cumbersome model (teacher) to a compact model (student), to reduce inference latency while maintaining performance.
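The distillation objective referenced here is usually written in its classic softened-logits form (Hinton et al.): the student is trained to match the teacher's temperature-scaled output distribution. A minimal sketch with toy logits, shown only to make the technique concrete:

```python
import math

# Knowledge-distillation loss: cross-entropy between the teacher's and
# student's temperature-softened distributions over items/classes.

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy H(teacher_soft, student_soft)."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])   # student matches
shifted = distillation_loss(teacher, [0.2, 1.0, 3.0])   # student disagrees
print(aligned < shifted)  # matching the teacher gives lower loss -> True
```

A higher temperature flattens both distributions, exposing the teacher's relative preferences among non-top items, which is the signal the student learns from.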
1 code implementation • COLING 2020 • Dongmin Hyun, Junsu Cho, Hwanjo Yu
We release large-scale datasets of users' comments in two languages, English and Korean, for aspect-level sentiment analysis in the automotive domain.
1 code implementation • Conference 2020 • Dongmin Hyun, Junsu Cho, Chanyoung Park, Hwanjo Yu
More precisely, we first predict the interest sustainability of each item, that is, how likely each item will be consumed in the future.
no code implementations • 7 Sep 2020 • Beomjo Shin, Junsu Cho, Hwanjo Yu, Seungjin Choi
Since a positive bag contains both positive and negative instances, it is often required to detect positive instances (key instances) when a set of instances is categorized as a positive bag.
1 code implementation • 7 Jun 2020 • Chanyoung Park, Carl Yang, Qi Zhu, Donghyun Kim, Hwanjo Yu, Jiawei Han
To capture the multiple aspects of each node, existing studies mainly rely on offline graph clustering performed prior to the actual embedding, which results in the cluster membership of each node (i.e., node aspect distribution) fixed throughout training of the embedding model.
no code implementations • 26 Nov 2019 • Seonghyeon Lee, Chanyoung Park, Hwanjo Yu
We view the heterogeneous network embedding as simultaneously solving multiple tasks in which each task corresponds to each relation type in a network.
2 code implementations • 15 Nov 2019 • Chanyoung Park, Donghyun Kim, Jiawei Han, Hwanjo Yu
Even for those that consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph.
no code implementations • 25 Sep 2019 • Gyoung S. Na, Dongmin Hyeon, Hwanjo Yu
This paper proposes a new approach for step size adaptation in gradient methods.
no code implementations • 25 Sep 2019 • Dongha Lee, Sehun Yu, Hwanjo Yu
The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not always match the training distribution in real-world applications.
no code implementations • 25 Sep 2019 • Sehun Yu, Dongha Lee, Hwanjo Yu
Inspired by methods that apply global average pooling to the feature maps of convolutional neural networks, our method aims to extract informative sequential patterns from the feature maps.
1 code implementation • 4 Jun 2019 • Chanyoung Park, Donghyun Kim, Qi Zhu, Jiawei Han, Hwanjo Yu
In this paper, we propose a novel task-guided pair embedding framework in heterogeneous networks, called TaPEm, that directly models the relationship between a pair of nodes that are related to a specific task (e.g., the paper-author relationship in author identification).
1 code implementation • 4 Jun 2019 • Chanyoung Park, Donghyun Kim, Xing Xie, Hwanjo Yu
We also conduct extensive qualitative evaluations on the translation vectors learned by our proposed method to ascertain the benefit of adopting the translation mechanism for implicit feedback-based recommendations.
Ranked #1 on Recommendation Systems on Delicious
no code implementations • EMNLP 2018 • Seonghan Ryu, Sangjun Koo, Hwanjo Yu, Gary Geunbae Lee
The main goal of this paper is to develop out-of-domain (OOD) detection for dialog systems.
Generative Adversarial Network • Out-of-Distribution (OOD) Detection • +1
2 code implementations • 27 Jul 2018 • Seonghan Ryu, Seokhwan Kim, Junhwi Choi, Hwanjo Yu, Gary Geunbae Lee
Then we used domain-category analysis as an auxiliary task to train neural sentence embedding for OOD sentence detection.
no code implementations • 21 Jun 2017 • Chanyoung Park, Donghyun Kim, Min-Chul Yang, Jung-Tae Lee, Hwanjo Yu
We begin by formulating various model assumptions, each one assuming a different order of user preferences among purchased, clicked-but-not-purchased, and non-clicked items, to study the usefulness of leveraging click records.
no code implementations • 11 Apr 2017 • Yejin Kim, Jimeng Sun, Hwanjo Yu, Xiaoqian Jiang
In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data.
no code implementations • 25 May 2016 • Byung-soo Kim, Hwanjo Yu, Gary Geunbae Lee
To the best of our knowledge, this is the first work to apply deep learning to Open IE.
1 code implementation • LREC 2016 • Muhammad Humayoun, Hwanjo Yu
Preprocessing is a preliminary step in many fields including IR and NLP.