Wasserstein Autoencoders for Collaborative Filtering

15 Sep 2018 · Jingbin Zhong, Xiaofeng Zhang

Recommender systems have long been investigated in the literature. Recently, users' implicit feedback, such as 'click' or 'browse' actions, has been shown to enhance recommendation performance, and a number of approaches have been proposed to exploit it. Among them, the variational autoencoder (VAE) approach achieves superior performance. However, the distributions of its encoded latent variables overlap substantially, which may restrict its recommendation ability. To cope with this challenge, this paper extends the Wasserstein autoencoder (WAE) to collaborative filtering. In particular, the loss function of the adapted WAE is redesigned by introducing two additional terms: (1) a mutual information loss between the distribution of the latent variables and the assumed ground-truth distribution, and (2) an L1 regularization loss that encourages the encoded latent variables to be sparse. Two different cost functions are designed to measure the distance between the implicit feedback data and its regenerated version. Experiments are evaluated on three widely adopted data sets, i.e., ML-20M, Netflix and LASTFM. Both baseline and state-of-the-art approaches, namely Mult-DAE, Mult-VAE, CDAE and SLIM, are chosen for performance comparison. The proposed approach outperforms the compared methods with respect to the evaluation criteria Recall@1, Recall@5 and NDCG@10, which demonstrates its efficacy.
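To make the redesigned objective concrete, below is a minimal PyTorch sketch of a WAE-style loss for implicit feedback. It is an illustration under stated assumptions, not the paper's exact implementation: the encoder/decoder, the names wae_cf_loss, lam_div and lam_l1, and all hyperparameter values are hypothetical; an MMD penalty with an RBF kernel (the standard WAE-MMD divergence) stands in for the paper's mutual-information term, whose precise form is defined in the paper; and the multinomial log-likelihood familiar from Mult-VAE serves as one possible choice for the reconstruction cost.

    import torch
    import torch.nn.functional as F

    def rbf_mmd(z, z_prior, sigma=1.0):
        """Biased MMD estimate with an RBF kernel between encoded codes z
        and samples z_prior drawn from the assumed prior."""
        def kernel(a, b):
            return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
        return (kernel(z, z).mean() + kernel(z_prior, z_prior).mean()
                - 2 * kernel(z, z_prior).mean())

    def wae_cf_loss(x, encoder, decoder, lam_div=10.0, lam_l1=1e-3):
        """Sketch of a WAE-style objective for implicit feedback.

        x is a (batch, n_items) binary click matrix. The reconstruction
        term is a multinomial log-likelihood as in Mult-VAE-style models;
        the MMD term stands in for the divergence between the encoded
        latent distribution and the assumed prior; the L1 term encourages
        sparse latent codes, as described in the abstract.
        """
        z = encoder(x)                               # deterministic codes
        log_probs = F.log_softmax(decoder(z), dim=-1)
        recon = -(log_probs * x).sum(dim=-1).mean()  # multinomial NLL
        z_prior = torch.randn_like(z)                # assumed N(0, I) prior
        return recon + lam_div * rbf_mmd(z, z_prior) + lam_l1 * z.abs().mean()

A logistic log-likelihood over the binary entries of x would be a natural candidate for the second reconstruction cost the abstract mentions; which of the two works better is an empirical question the paper addresses.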


Datasets

ML-20M, Netflix, LASTFM
