Multimedia recommendation

10 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Multi-Modal Self-Supervised Learning for Recommendation

hkuds/mmssl 21 Feb 2023

The online emergence of multi-modal sharing platforms (e.g., TikTok, YouTube) is driving personalized recommender systems to incorporate various modalities (e.g., visual, textual, and acoustic) into latent user representations.

Multi-View Graph Convolutional Network for Multimedia Recommendation

enoche/mmrec 7 Aug 2023

A behavior-aware fuser is designed to comprehensively model user preferences by adaptively learning the relative importance of different modality features.
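The idea of adaptively weighting modality features per user can be illustrated with a toy sketch (this is not the paper's implementation; the function name `behavior_aware_fuse` and the dot-product scoring are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def behavior_aware_fuse(user_emb, modal_embs):
    """Toy sketch: weight modality features per user with attention-like scores.

    user_emb:   (n_users, d) behavior-based user embeddings.
    modal_embs: list of (n_users, d) modality-specific user features,
                assumed already projected into the same space.
    """
    stacked = np.stack(modal_embs, axis=1)             # (n_users, n_mod, d)
    scores = (stacked * user_emb[:, None, :]).sum(-1)  # dot-product relevance
    weights = softmax(scores, axis=1)                  # per-user modality weights
    return (weights[:, :, None] * stacked).sum(axis=1) # weighted fusion
```

Users whose behavior embedding aligns more with, say, the visual features get a larger visual weight, rather than a fixed global mixing ratio.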

MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video

weiyinwei/mmgcn ACM International Conference on Multimedia 2019

Existing works on multimedia recommendation largely exploit multi-modal contents to enrich item representations, while less effort is made to leverage information interchange between users and items to enhance user representations and further capture users' fine-grained preferences on different modalities.

ContentWise Impressions: An Industrial Dataset with Impressions Included

ContentWise/contentwise-impressions 3 Aug 2020

In this article, we introduce the ContentWise Impressions dataset, a collection of implicit interactions and impressions of movies and TV series from an Over-The-Top media service, which delivers its media contents over the Internet.

Mining Latent Structures for Multimedia Recommendation

CRIPAC-DIG/LATTICE 19 Apr 2021

To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.
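A rough illustration of the modality-aware structure-learning idea (a sketch under the assumption of kNN graphs over cosine similarity, not the authors' code): build one item-item graph per modality from feature similarity, then aggregate the per-modality graphs into a latent item graph.

```python
import numpy as np

def knn_item_graph(features, k=2):
    """Row-normalized kNN item-item graph from one modality's features.

    features: (n_items, dim) array. Returns an (n_items, n_items)
    adjacency keeping each item's top-k cosine neighbors.
    """
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)           # exclude self-loops
    adj = np.zeros_like(sim)
    topk = np.argsort(-sim, axis=1)[:, :k]   # indices of k nearest items
    rows = np.repeat(np.arange(sim.shape[0]), k)
    adj[rows, topk.ravel()] = 1.0
    return adj / adj.sum(axis=1, keepdims=True)  # row-normalize

def fuse_modalities(graphs, weights=None):
    """Aggregate per-modality item graphs into one latent item graph."""
    weights = weights or [1.0 / len(graphs)] * len(graphs)
    return sum(w * g for w, g in zip(weights, graphs))

rng = np.random.default_rng(0)
visual = rng.normal(size=(5, 8))    # toy visual features for 5 items
textual = rng.normal(size=(5, 16))  # toy textual features
latent = fuse_modalities([knn_item_graph(visual), knn_item_graph(textual)])
```

In LATTICE the aggregation weights are learned rather than fixed; the fixed uniform weights here are a simplification.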

Latent Structure Mining with Contrastive Modality Fusion for Multimedia Recommendation

cripac-dig/micro 1 Nov 2021

Although having access to multiple modalities might allow us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in previous work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure MIning with ContRastive mOdality fusion method (MICRO for brevity).
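Contrastive modality fusion can be sketched with an InfoNCE-style objective that pulls each item's modality-specific embedding toward its own fused embedding and pushes it away from other items' (a generic contrastive-learning sketch, not MICRO's exact loss):

```python
import numpy as np

def info_nce(modal_emb, fused_emb, temperature=0.2):
    """InfoNCE-style loss aligning one modality with the fused representation.

    Positives are matching (item_i modality, item_i fused) pairs on the
    diagonal; other items in the batch act as negatives.
    """
    a = modal_emb / np.linalg.norm(modal_emb, axis=1, keepdims=True)
    b = fused_emb / np.linalg.norm(fused_emb, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal
```

The loss is minimized when each modality embedding is most similar to its own fused embedding, which encourages the fusion to stay consistent with every modality instead of collapsing onto one.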

GRCN: Graph-Refined Convolutional Network for Multimedia Recommendation with Implicit Feedback

weiyinwei/grcn 3 Nov 2021

Reorganizing implicit feedback of users as a user-item interaction graph facilitates the applications of graph convolutional networks (GCNs) in recommendation tasks.
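Reorganizing implicit feedback as a bipartite graph and propagating over it can be sketched as a single symmetric-normalized message-passing step (a minimal LightGCN-style sketch, not GRCN's refined-graph method):

```python
import numpy as np

def bipartite_propagation(interactions, user_emb, item_emb):
    """One symmetric-normalized propagation step on the user-item graph.

    interactions: (n_users, n_items) binary implicit-feedback matrix.
    Returns updated (user_emb, item_emb) after one round of message passing.
    """
    r = interactions.astype(float)
    d_u = np.maximum(r.sum(axis=1), 1.0)  # user degrees (avoid div by zero)
    d_i = np.maximum(r.sum(axis=0), 1.0)  # item degrees
    # D_u^{-1/2} R D_i^{-1/2}: symmetric normalization of the bipartite graph
    norm_r = r / np.sqrt(d_u)[:, None] / np.sqrt(d_i)[None, :]
    new_user = norm_r @ item_emb    # users aggregate neighboring items
    new_item = norm_r.T @ user_emb  # items aggregate neighboring users
    return new_user, new_item

rng = np.random.default_rng(1)
R = (rng.random((4, 6)) > 0.5).astype(int)  # toy implicit feedback
u, i = bipartite_propagation(R, rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
```

GRCN additionally prunes noisy edges before propagating; the sketch above propagates over the raw interaction graph.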

LightGT: A Light Graph Transformer for Multimedia Recommendation

Liuwq-bit/LightGT SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval 2023

Considering its challenges in effectiveness and efficiency, we propose a novel Transformer-based recommendation model, termed the Light Graph Transformer (LightGT).

Formalizing Multimedia Recommendation through Multimodal Deep Learning

sisinflab/formal-multimod-rec 11 Sep 2023

Recommender systems (RSs) offer personalized navigation experiences on online platforms, but recommendation remains a challenging task, particularly in specific scenarios and domains.

MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation

kimyungi/monet 15 Dec 2023

In this paper, we focus on multimedia recommender systems using graph convolutional networks (GCNs) where the multimodal features as well as user-item interactions are employed together.