Wide & Deep jointly trains a wide linear model and a deep neural network to combine the benefits of memorization and generalization for real-world recommender systems. The wide component is a generalized linear model; the deep component is a feed-forward neural network. Their output log-odds are combined as a weighted sum to form the prediction, which is fed to a logistic loss for joint training. Training back-propagates gradients from the output to both the wide and deep parts simultaneously using mini-batch stochastic optimization, with FTRL with L1 regularization as the optimizer for the wide part and AdaGrad for the deep part. The combined model is illustrated in the figure (center).
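The joint prediction can be sketched in a few lines of NumPy. This is a minimal forward-pass illustration, not the paper's production TensorFlow implementation; all names, dimensions, and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Hypothetical inputs: cross-product/sparse features for the wide part
# (shown dense for simplicity) and dense/embedded features for the deep part.
x_wide = rng.random(10)
x_deep = rng.random(8)

# Wide component: generalized linear model.
w_wide = rng.normal(size=10)

# Deep component: one hidden ReLU layer feeding a linear output.
W1 = rng.normal(size=(16, 8))
b1 = np.zeros(16)
w_deep = rng.normal(size=16)
b_out = 0.0

h = np.maximum(0.0, W1 @ x_deep + b1)          # hidden activations
logit = w_wide @ x_wide + w_deep @ h + b_out   # weighted sum of log-odds
p = sigmoid(logit)                             # joint prediction P(y=1 | x)
```

During joint training, the gradient of the logistic loss at `logit` flows back into both `w_wide` and the deep parameters in the same mini-batch step, which is what distinguishes joint training from ensembling separately trained models.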
Source: Wide & Deep Learning for Recommender Systems
| Task | Papers | Share |
|---|---|---|
| Click-Through Rate Prediction | 4 | 40.00% |
| Recommendation Systems | 3 | 30.00% |
| Link Prediction | 1 | 10.00% |
| Feature Engineering | 1 | 10.00% |
| Memorization | 1 | 10.00% |