Bert4XMR: Cross-Market Recommendation with Bidirectional Encoder Representations from Transformer

24 May 2023 · Zheng Hu, Satoshi Nakagawa, Shi-Min Cai, Fuji Ren

Real-world multinational e-commerce companies, such as Amazon and eBay, operate across multiple countries and regions. Some of these markets are data-scarce, while others are data-rich. In recent years, cross-market recommendation (XMR) has been proposed to bolster data-scarce markets by leveraging auxiliary information from data-rich markets. Previous XMR algorithms have employed techniques such as shared-bottom architectures or inter-market similarity modeling to improve performance. However, existing approaches suffer from two crucial limitations: (1) they ignore the item co-occurrences provided by data-rich markets, and (2) they do not adequately address negative transfer stemming from disparities across markets. To address these limitations, we propose Bert4XMR, a novel session-based model that captures item co-occurrences across markets while mitigating negative transfer. Specifically, we adopt the pre-training and fine-tuning paradigm to facilitate knowledge transfer across markets: the model is pre-trained on global markets to learn item co-occurrences and fine-tuned on the target market for customization. To mitigate potential negative transfer, we disentangle item representations into market embeddings and item embeddings. Market embeddings model the bias associated with each market, while item embeddings learn generic item representations. Extensive experiments on seven real-world datasets demonstrate our model's effectiveness: it outperforms the second-best model by an average of $4.82\%$, $4.73\%$, $7.66\%$, and $6.49\%$ across four metrics. Through an ablation study, we experimentally demonstrate that the market embedding approach helps prevent negative transfer, especially in data-scarce markets. Our implementation is available at https://github.com/laowangzi/Bert4XMR.
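
The disentangled representation described in the abstract, a generic item embedding plus a market embedding that absorbs market-specific bias, fed into a BERT-style bidirectional encoder, can be sketched roughly as below. This is a minimal illustrative sketch under assumptions, not the authors' implementation: the class name, hidden sizes, tied output weights, and the use of PyTorch's nn.TransformerEncoder are choices made here for brevity; consult the linked repository for the actual model.

```python
import torch
import torch.nn as nn


class Bert4XMRSketch(nn.Module):
    """Illustrative sketch (assumed architecture): globally shared item embeddings
    plus a per-market embedding modeling market bias, fed into a Transformer encoder."""

    def __init__(self, num_items, num_markets, hidden=64, heads=2, layers=2, max_len=50):
        super().__init__()
        # index 0 is reserved for padding (an assumed convention, not from the paper)
        self.item_emb = nn.Embedding(num_items + 1, hidden, padding_idx=0)
        self.market_emb = nn.Embedding(num_markets, hidden)
        self.pos_emb = nn.Embedding(max_len, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, dim_feedforward=hidden * 4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, item_seq, market_id):
        # item_seq: (batch, seq_len) item ids within a session, 0 = padding
        # market_id: (batch,) index of the market each session comes from
        positions = torch.arange(item_seq.size(1), device=item_seq.device)
        x = (
            self.item_emb(item_seq)                    # generic item representation
            + self.market_emb(market_id).unsqueeze(1)  # market bias added at every step
            + self.pos_emb(positions)                  # position information
        )
        pad_mask = item_seq.eq(0)
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        # score all items by reusing (tying) the item embedding table
        return h @ self.item_emb.weight.T              # (batch, seq_len, num_items + 1)


# Example usage: two sessions from markets 3 and 0, padded to length 4
model = Bert4XMRSketch(num_items=1000, num_markets=7)
scores = model(torch.tensor([[5, 42, 7, 0], [13, 9, 0, 0]]), torch.tensor([3, 0]))
```

Because only the market embedding differs across markets while the item table is shared, pre-training on global data can learn item co-occurrences once, and fine-tuning on a target market mainly adapts the market-specific bias, which is the intuition behind separating the two embeddings.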
