Enhancing VAEs for Collaborative Filtering: Flexible Priors & Gating Mechanisms

3 Nov 2019 · Daeryong Kim, Bongwon Suh

Neural-network-based models for collaborative filtering have recently started to gain attention. One branch of research uses deep generative models to capture user preferences, where variational autoencoders (VAEs) were shown to produce state-of-the-art results. However, the current variational autoencoder for CF has some potentially problematic characteristics. The first is the overly simplistic prior that VAEs place on the latent representations of user preference. The second is the model's inability to learn deeper representations with more than one hidden layer per network. Our goal is to incorporate appropriate techniques that mitigate these problems and further improve recommendation performance. Ours is the first work to apply flexible priors to collaborative filtering; we show that the simple priors of original VAEs may be too restrictive to fully model user preferences, and that a more flexible prior yields significant gains. We experiment with the VampPrior, originally proposed for image generation, to examine the effect of flexible priors in CF. We also show that the VampPrior coupled with gating mechanisms outperforms state-of-the-art results, including the Variational Autoencoder for Collaborative Filtering, by meaningful margins on two popular benchmark datasets (MovieLens and Netflix).
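To make the two ingredients concrete, the following is a minimal PyTorch sketch of a gated dense layer and a VampPrior log-density, assuming a standard Gaussian-posterior VAE-CF encoder. Class names, layer sizes, and the pseudo-input count are illustrative choices, not the authors' implementation; during training, the KL term of the ELBO would be estimated by Monte Carlo as E_q[log q(z|x) - log p(z)], with log p(z) computed by log_prior below.

```python
# Illustrative sketch, not the paper's code: a gated dense layer
# (to ease optimization of deeper encoders/decoders) and a VampPrior,
# i.e. a uniform mixture of variational posteriors evaluated at
# learnable pseudo-inputs.
import math

import torch
import torch.nn as nn


class GatedDense(nn.Module):
    """Gated linear layer: h = linear(x) * sigmoid(gate(x))."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.linear(x) * torch.sigmoid(self.gate(x))


class VampPriorEncoder(nn.Module):
    """Encoder with a VampPrior: p(z) = (1/K) * sum_k q(z | u_k),
    where u_1..u_K are learnable pseudo-inputs in the item space."""

    def __init__(self, n_items, hidden_dim=600, latent_dim=200, n_pseudo=500):
        super().__init__()
        self.encoder = nn.Sequential(
            GatedDense(n_items, hidden_dim),
            nn.Linear(hidden_dim, 2 * latent_dim),  # -> mean, log-variance
        )
        # Pseudo-inputs are trained jointly with the rest of the model.
        self.pseudo_inputs = nn.Parameter(torch.rand(n_pseudo, n_items))

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        return mu, logvar

    def log_prior(self, z):
        """log p(z) under the VampPrior for a batch of latent samples z."""
        mu, logvar = self.encode(self.pseudo_inputs)            # (K, D)
        diff = z.unsqueeze(1) - mu                              # (B, K, D)
        log_q = -0.5 * (math.log(2 * math.pi) + logvar
                        + diff.pow(2) / logvar.exp()).sum(-1)   # (B, K)
        # log p(z) = logsumexp_k log q(z | u_k) - log K
        return torch.logsumexp(log_q, dim=1) - math.log(self.pseudo_inputs.shape[0])
```

Intuitively, the pseudo-inputs act as learned "prototype users": the prior becomes a mixture anchored at the posteriors of these prototypes, which is far more expressive than a single standard Gaussian.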


Results from the Paper


Task                    Dataset        Model         Metric     Metric Value  Global Rank
Recommendation Systems  MovieLens 20M  H+Vamp Gated  Recall@20  0.41308       #4
Recommendation Systems  MovieLens 20M  H+Vamp Gated  Recall@50  0.55109       #3
Recommendation Systems  MovieLens 20M  H+Vamp Gated  nDCG@100   0.44522       #2
Recommendation Systems  Netflix        H+Vamp Gated  Recall@20  0.37678       #1
Recommendation Systems  Netflix        H+Vamp Gated  Recall@50  0.46252       #1
Recommendation Systems  Netflix        H+Vamp Gated  nDCG@100   0.40861       #1
