Search Results for author: Dongha Lee

Found 34 papers, 13 papers with code

Multi-Domain Recommendation to Attract Users via Domain Preference Modeling

no code implementations • 26 Mar 2024 • Hyunjun Ju, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, Hwanjo Yu

Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not interacted yet, by using knowledge from the user's "seen" domains.

Exploring Language Model's Code Generation Ability with Auxiliary Functions

1 code implementation • 15 Mar 2024 • Seonghyeon Lee, Sanghwan Jang, Seongbo Jang, Dongha Lee, Hwanjo Yu

However, our analysis also reveals that the models underutilize calls to the auxiliary function, suggesting a future direction of enhancing their implementations by eliciting the auxiliary function call ability encoded in the models.

Code Generation

Pearl: A Review-driven Persona-Knowledge Grounded Conversational Recommendation Dataset

no code implementations • 7 Mar 2024 • Minjin Kim, Minju Kim, Hana Kim, Beong-woo Kwak, Soyeon Chun, Hyunseo Kim, SeongKu Kang, Youngjae Yu, Jinyoung Yeo, Dongha Lee

Our experimental results demonstrate that utterances in PEARL include more specific user preferences, show expertise in the target domain, and provide recommendations more relevant to the dialogue context than those in prior datasets.

Recommendation Systems

Improving Retrieval in Theme-specific Applications using a Corpus Topical Taxonomy

1 code implementation • 7 Mar 2024 • SeongKu Kang, Shivam Agarwal, Bowen Jin, Dongha Lee, Hwanjo Yu, Jiawei Han

Document retrieval has greatly benefited from the advancements of large-scale pre-trained language models (PLMs).

Retrieval

Evidence-Focused Fact Summarization for Knowledge-Augmented Zero-Shot Question Answering

no code implementations • 5 Mar 2024 • Sungho Ko, Hyunjin Cho, Hyungjoo Chae, Jinyoung Yeo, Dongha Lee

Recent studies have investigated utilizing Knowledge Graphs (KGs) to enhance Question Answering (QA) performance of Large Language Models (LLMs), yet structured KG verbalization remains challenging.

Knowledge Graphs · Question Answering

Ever-Evolving Memory by Blending and Refining the Past

no code implementations • 3 Mar 2024 • Seo Hyun Kim, Keummin Ka, Yohan Jo, Seung-won Hwang, Dongha Lee, Jinyoung Yeo

To effectively construct memory, it is crucial to seamlessly connect past and present information, while also possessing the ability to forget obstructive information.

Chatbot · Response Generation

Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy

no code implementations • 1 Mar 2024 • Jieyong Kim, Ryang Heo, Yongsik Seo, SeongKu Kang, Jinyoung Yeo, Dongha Lee

In the task of aspect sentiment quad prediction (ASQP), generative methods for predicting sentiment quads have shown promising results.

Can Large Language Models be Good Emotional Supporter? Mitigating Preference Bias on Emotional Support Conversation

no code implementations • 20 Feb 2024 • Dongjin Kang, Sunghwan Kim, Taeyoon Kwon, Seungjun Moon, Hyunsouk Cho, Youngjae Yu, Dongha Lee, Jinyoung Yeo

Motivated by these, we explore the impact of the inherent preference in LLMs on providing emotional support, and consequently, we observe that exhibiting a high preference for specific strategies hinders effective emotional support, degrading robustness in predicting the appropriate strategy.

Emotional Intelligence

Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement

no code implementations • 25 Jan 2024 • Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo

As the pioneer of persona expansion in multi-session settings, our framework facilitates better response generation via human-like persona refinement.

Management · Response Generation

RDGCL: Reaction-Diffusion Graph Contrastive Learning for Recommendation

no code implementations • 27 Dec 2023 • Jeongwhan Choi, Hyowon Wi, Chaejeong Lee, Sung-Bae Cho, Dongha Lee, Noseong Park

Contrastive learning (CL) has emerged as a promising technique for improving recommender systems, addressing the challenge of data sparsity by leveraging self-supervised signals from raw data.

Contrastive Learning · Data Integration +1

Large Language Models are Clinical Reasoners: Reasoning-Aware Diagnosis Framework with Prompt-Generated Rationales

no code implementations • 12 Dec 2023 • Taeyoon Kwon, Kai Tzu-iunn Ong, Dongjin Kang, Seungjun Moon, Jeong Ryong Lee, Dosik Hwang, Yongsik Sim, Beomseok Sohn, Dongha Lee, Jinyoung Yeo

Specifically, we address clinical reasoning for disease diagnosis, where the LLM generates diagnostic rationales providing its insight into the presented patient data and the reasoning path towards the diagnosis, namely Clinical Chain-of-Thought (Clinical CoT).

Reading Comprehension

SCStory: Self-supervised and Continual Online Story Discovery

1 code implementation • 27 Nov 2023 • Susik Yoon, Yu Meng, Dongha Lee, Jiawei Han

With a lightweight hierarchical embedding module that first learns sentence representations and then article representations, SCStory identifies story-relevant information of news articles and uses them to discover stories.
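
To make the two-level design concrete, here is a minimal sketch of a sentence-then-article embedding module; the GRU sentence encoder, attention pooling, and all dimensions are illustrative assumptions, not SCStory's actual architecture.

```python
# Hedged sketch of a two-level (sentence -> article) embedding module.
# All components below are expository assumptions, not SCStory's code.
import torch
import torch.nn as nn

class HierarchicalArticleEncoder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.sent_encoder = nn.GRU(dim, dim, batch_first=True)  # tokens -> sentence
        self.attn = nn.Linear(dim, 1)                           # sentence salience

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_sentences, seq_len, dim) token embeddings of one article
        _, h = self.sent_encoder(tokens)                 # h: (1, num_sentences, dim)
        sent_reprs = h.squeeze(0)                        # (num_sentences, dim)
        weights = torch.softmax(self.attn(sent_reprs), dim=0)   # salience weights
        return (weights * sent_reprs).sum(dim=0)         # article representation

article = torch.randn(10, 20, 128)                       # 10 sentences, 20 tokens each
print(HierarchicalArticleEncoder()(article).shape)       # torch.Size([128])
```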

Continual Learning · Contrastive Learning +1

RTSUM: Relation Triple-based Interpretable Summarization with Multi-level Salience Visualization

2 code implementations • 21 Oct 2023 • Seonglae Cho, Yonggi Cho, HoonJae Lee, Myungha Jang, Jinyoung Yeo, Dongha Lee

In this paper, we present RTSUM, an unsupervised summarization framework that utilizes relation triples as the basic unit for summarization.

Language Modelling · Relation

Unsupervised Story Discovery from Continuous News Streams via Scalable Thematic Embedding

1 code implementation • 8 Apr 2023 • Susik Yoon, Dongha Lee, Yunyi Zhang, Jiawei Han

Unsupervised discovery of stories with correlated news articles in real time helps people digest massive news streams without expensive human annotations.

Sentence

Evidentiality-aware Retrieval for Overcoming Abstractiveness in Open-Domain Question Answering

no code implementations • 6 Apr 2023 • Yongho Song, Dahyun Lee, Myungha Jang, Seung-won Hwang, Kyungjae Lee, Dongha Lee, Jinyoung Yeo

The long-standing goal of dense retrievers in abstractive open-domain question answering (ODQA) tasks is to learn to capture evidence passages among relevant passages for any given query, such that the reader produces factually correct outputs from the evidence passages.

Contrastive Learning · counterfactual +4

Distillation from Heterogeneous Models for Top-K Recommendation

1 code implementation • 2 Mar 2023 • SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu

Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), to reduce the huge inference costs while retaining high accuracy.
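
As a rough illustration of this setup, the sketch below distills an averaged ensemble of teacher score matrices into a student via a softened KL objective; the mean aggregation and temperature are assumptions for exposition, not the paper's exact formulation.

```python
# Hedged sketch of ensemble knowledge distillation for recommendation:
# the student's item-score distribution is pulled toward the averaged
# distribution of heterogeneous teachers. Aggregation/loss are assumptions.
import torch
import torch.nn.functional as F

def ensemble_kd_loss(student_scores, teacher_score_list, temperature=2.0):
    # student_scores: (batch, num_items); each teacher has the same shape
    teacher_scores = torch.stack(teacher_score_list).mean(dim=0)
    p_teacher = F.softmax(teacher_scores / temperature, dim=-1)
    log_p_student = F.log_softmax(student_scores / temperature, dim=-1)
    # KL divergence transfers the ensemble's ranking knowledge to the student
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

student = torch.randn(4, 100, requires_grad=True)
teachers = [torch.randn(4, 100) for _ in range(3)]  # e.g., MF, AE, GNN teachers
loss = ensemble_kd_loss(student, teachers)
loss.backward()
```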

Knowledge Distillation · Recommendation Systems +1

Learning Topology-Specific Experts for Molecular Property Prediction

1 code implementation • 27 Feb 2023 • Su Kim, Dongha Lee, SeongKu Kang, Seonghyeon Lee, Hwanjo Yu

In this paper, motivated by this observation, we propose TopExpert, which leverages topology-specific prediction models (referred to as experts), each of which is responsible for a molecular group sharing similar topological semantics.
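
As a rough sketch, the expert idea can be illustrated with a small mixture-of-experts model in which a gating network softly assigns molecules to topology groups; the gating scheme and expert architecture here are assumptions, not TopExpert's actual design.

```python
# Illustrative mixture-of-experts sketch: a gate softly routes each molecule
# to topology-group experts. Group count and architectures are assumptions.
import torch
import torch.nn as nn

class TopologyExperts(nn.Module):
    def __init__(self, dim=128, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)                 # topology grouping
        self.experts = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_experts))

    def forward(self, mol_repr: torch.Tensor) -> torch.Tensor:
        # mol_repr: (batch, dim) graph-level molecule representations
        weights = torch.softmax(self.gate(mol_repr), dim=-1)    # (batch, experts)
        preds = torch.cat([e(mol_repr) for e in self.experts], dim=-1)
        return (weights * preds).sum(dim=-1)                    # weighted prediction

mols = torch.randn(8, 128)
print(TopologyExperts()(mols).shape)                            # torch.Size([8])
```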

Molecular Property Prediction · Property Prediction

Topic Taxonomy Expansion via Hierarchy-Aware Topic Phrase Generation

no code implementations • 18 Oct 2022 • Dongha Lee, Jiaming Shen, Seonghyeon Lee, Susik Yoon, Hwanjo Yu, Jiawei Han

Topic taxonomies display hierarchical topic structures of a text corpus and provide topical knowledge to enhance various NLP applications.

Relation · Taxonomy Expansion

Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning

1 code implementation • ACL 2022 • Seonghyeon Lee, Dongha Lee, Seongbo Jang, Hwanjo Yu

In the end, we propose CLRCMD, a contrastive learning framework that optimizes RCMD of sentence pairs, which enhances both the quality of sentence similarity and its interpretation.
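
For intuition, a simplified token-alignment similarity in the spirit of RCMD might look like the following, where each contextualized token greedily matches its most similar counterpart; this greedy relaxation is an expository assumption rather than CLRCMD's exact distance.

```python
# Hedged sketch of a relaxed token-alignment similarity: token-pair cosine
# similarities are computed and each token is aligned to its best match in
# the other sentence. A simplification, not CLRCMD's exact RCMD.
import torch
import torch.nn.functional as F

def relaxed_alignment_sim(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x: (m, d), y: (n, d) contextualized token embeddings of two sentences
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    sim = x @ y.T                       # (m, n) token-pair cosine similarities
    # symmetric greedy alignment, averaged over both directions
    return 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())

s1, s2 = torch.randn(7, 768), torch.randn(9, 768)
print(relaxed_alignment_sim(s1, s2))    # scalar similarity in [-1, 1]
```

Because the similarity decomposes into per-token-pair contributions, the alignment matrix itself serves as the interpretation of which word pairs drive the sentence-level score.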

Contrastive Learning · Language Modelling +5

Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering

1 code implementation • 26 Feb 2022 • SeongKu Kang, Dongha Lee, Wonbin Kweon, Junyoung Hwang, Hwanjo Yu

ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with heterogeneous objectives.
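
A minimal sketch of such a multi-branch model is given below: a shared embedding backbone with several heads whose averaged output serves as a consensus; the head count, head architecture, and consensus rule are all illustrative assumptions.

```python
# Hedged sketch of a multi-branch collaborative-filtering model with
# auxiliary heads. Head design and the consensus rule are assumptions.
import torch
import torch.nn as nn

class MultiBranchCF(nn.Module):
    def __init__(self, num_users, num_items, dim=64, num_heads=3):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # shared backbone
        self.item_emb = nn.Embedding(num_items, dim)
        self.heads = nn.ModuleList(nn.Linear(2 * dim, 1) for _ in range(num_heads))

    def forward(self, users, items):
        z = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        scores = [head(z).squeeze(-1) for head in self.heads]  # one per objective
        consensus = torch.stack(scores).mean(dim=0)            # consensus score
        return scores, consensus

model = MultiBranchCF(1000, 5000)
head_scores, consensus = model(torch.tensor([1, 2]), torch.tensor([10, 20]))
# Each head would be trained with its own objective (e.g., BPR, CE, MSE),
# while the consensus acts as a shared learning target across branches.
```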

Collaborative Filtering

TaxoCom: Topic Taxonomy Completion with Hierarchical Discovery of Novel Topic Clusters

no code implementations • 18 Jan 2022 • Dongha Lee, Jiaming Shen, SeongKu Kang, Susik Yoon, Jiawei Han, Hwanjo Yu

Topic taxonomies, which represent the latent topic (or category) structure of document collections, provide valuable knowledge of contents in many applications such as web search and information filtering.

Clustering · Topic coverage

Out-of-Category Document Identification Using Target-Category Names as Weak Supervision

no code implementations • 24 Nov 2021 • Dongha Lee, Dongmin Hyun, Jiawei Han, Hwanjo Yu

To address this challenge, we introduce a new task referred to as out-of-category detection, which aims to distinguish the documents according to their semantic relevance to the inlier (or target) categories by using the category names as weak supervision.
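
One plausible reading of this setup, sketched below, scores each document against embedded category names and flags low-similarity documents; the shared encoder and threshold are hypothetical choices for illustration only.

```python
# Illustrative sketch of category names as weak supervision: documents and
# category names share an embedding space, and a document whose best category
# similarity falls below a threshold is flagged out-of-category.
# The encoder and threshold are hypothetical, not the paper's method.
import torch
import torch.nn.functional as F

def out_of_category_flags(doc_embs, category_name_embs, threshold=0.3):
    # doc_embs: (num_docs, d); category_name_embs: (num_categories, d)
    docs = F.normalize(doc_embs, dim=-1)
    cats = F.normalize(category_name_embs, dim=-1)
    best_sim = (docs @ cats.T).max(dim=1).values   # relevance to closest category
    return best_sim < threshold                    # True => out-of-category

docs = torch.randn(5, 256)
cats = torch.randn(3, 256)                         # e.g., name embeddings
print(out_of_category_flags(docs, cats))
```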

Learnable Structural Semantic Readout for Graph Classification

no code implementations • 22 Nov 2021 • Dongha Lee, Su Kim, Seonghyeon Lee, Chanyoung Park, Hwanjo Yu

With the help of a global readout operation that simply aggregates all node (or node-cluster) representations, existing GNN classifiers obtain a graph-level representation of an input graph and predict its class label using that representation.
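
For reference, the baseline behavior described here, a plain order-invariant readout followed by a classifier, can be sketched as follows; the paper's contribution replaces this simple aggregation with a learnable, structure-aware readout.

```python
# Sketch of the plain global readout baseline: node representations from any
# GNN encoder are mean-pooled and classified. Sizes are assumptions.
import torch
import torch.nn as nn

class GlobalReadoutClassifier(nn.Module):
    def __init__(self, dim=64, num_classes=2):
        super().__init__()
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, node_reprs: torch.Tensor) -> torch.Tensor:
        # node_reprs: (num_nodes, dim) output of any GNN encoder
        graph_repr = node_reprs.mean(dim=0)      # order-invariant global readout
        return self.classifier(graph_repr)       # graph-level class logits

nodes = torch.randn(17, 64)                      # a 17-node graph
print(GlobalReadoutClassifier()(nodes))
```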

Graph Classification · Position

Weakly Supervised Temporal Anomaly Segmentation with Dynamic Time Warping

1 code implementation • ICCV 2021 • Dongha Lee, Sehun Yu, Hyunjun Ju, Hwanjo Yu

Most recent studies on detecting and localizing temporal anomalies have mainly employed deep neural networks to learn the normal patterns of temporal data in an unsupervised manner.

Dynamic Time Warping · Segmentation

OoMMix: Out-of-manifold Regularization in Contextual Embedding Space for Text Classification

no code implementations • ACL 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu

Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.

Data Augmentation · text-classification +1

Out-of-Manifold Regularization in Contextual Embedding Space for Text Classification

1 code implementation • 14 May 2021 • Seonghyeon Lee, Dongha Lee, Hwanjo Yu

Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.

Data Augmentation · text-classification +1

Bootstrapping User and Item Representations for One-Class Collaborative Filtering

no code implementations • 13 May 2021 • Dongha Lee, SeongKu Kang, Hyunjun Ju, Chanyoung Park, Hwanjo Yu

To make the representations of positively-related users and items similar to each other while avoiding a collapsed solution, BUIR adopts two distinct encoder networks that learn from each other; the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides the consistent targets by slowly approximating the first encoder.
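
A minimal sketch of this bootstrapping scheme, written in the style of BYOL-like methods, is shown below; the embedding-table encoders, linear predictor, and momentum value are assumptions for illustration, not BUIR's exact implementation.

```python
# Hedged sketch of two-encoder bootstrapping: an online encoder predicts a
# momentum target encoder's output; the target slowly tracks the online
# weights, which helps prevent a collapsed solution. Details are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 64
online = nn.Embedding(1000, dim)           # e.g., user/item embedding encoder
target = copy.deepcopy(online)
predictor = nn.Linear(dim, dim)
for p in target.parameters():
    p.requires_grad_(False)                # target receives no gradient updates

def buir_style_loss(user_ids, item_ids):
    # online view of users predicts the target view of positively-related items
    p = F.normalize(predictor(online(user_ids)), dim=-1)
    z = F.normalize(target(item_ids), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()   # cosine prediction loss

@torch.no_grad()
def momentum_update(m=0.995):
    # the target slowly approximates the online encoder
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(m).add_(po, alpha=1 - m)

loss = buir_style_loss(torch.tensor([0, 1]), torch.tensor([5, 7]))
loss.backward()
momentum_update()
```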

Collaborative Filtering · Data Augmentation

Multi-Class Data Description for Out-of-distribution Detection

1 code implementation • 2 Apr 2021 • Dongha Lee, Sehun Yu, Hwanjo Yu

The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not match the training distribution in most real-world applications.

Out-of-Distribution Detection

Learnable Dynamic Temporal Pooling for Time Series Classification

no code implementations • 2 Apr 2021 • Dongha Lee, Seonghyeon Lee, Hwanjo Yu

With the increase of available time series data, predicting their class labels has been one of the most important challenges in a wide range of disciplines.

Classification · Dynamic Time Warping +4

One-class Classification Robust to Geometric Transformation

no code implementations • 1 Jan 2021 • Hyunjun Ju, Dongha Lee, SeongKu Kang, Hwanjo Yu

Recent studies on one-class classification have achieved remarkable performance by employing a self-supervised classifier that predicts the geometric transformation applied to in-class images.

Classification · General Classification +2

Deep Generative Classifier for Out-of-distribution Sample Detection

no code implementations • 25 Sep 2019 • Dongha Lee, Sehun Yu, Hwanjo Yu

The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution does not match the training distribution in most real-world applications.
