Click-Through Rate Prediction
133 papers with code • 19 benchmarks • 6 datasets
Click-through rate prediction is the task of predicting the likelihood that something on a website (such as an advertisement) will be clicked.
(Image credit: Deep Spatio-Temporal Neural Networks for Click-Through Rate Prediction)
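The task is typically framed as binary classification over sparse categorical features. Below is a minimal illustrative sketch using logistic regression over one-hot features; the data, field names, and `predict_ctr` helper are toy assumptions for illustration, not the method of any paper listed on this page:

```python
import numpy as np

# Toy impressions: hypothetical categorical features and click labels.
impressions = [
    ({"ad": "A", "site": "news"}, 1),
    ({"ad": "A", "site": "blog"}, 0),
    ({"ad": "B", "site": "news"}, 1),
    ({"ad": "B", "site": "blog"}, 0),
] * 50

# One-hot encode every (field, value) pair seen in the data.
vocab = sorted({(f, v) for feats, _ in impressions for f, v in feats.items()})
index = {pair: i for i, pair in enumerate(vocab)}

def encode(feats):
    x = np.zeros(len(vocab))
    for pair in feats.items():
        x[index[pair]] = 1.0
    return x

X = np.array([encode(f) for f, _ in impressions])
y = np.array([c for _, c in impressions], dtype=float)

# Logistic regression fit by gradient descent on the log loss.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

def predict_ctr(feats):
    """Estimated click probability for a new impression."""
    return 1.0 / (1.0 + np.exp(-encode(feats) @ w))
```

Calling `predict_ctr({"ad": "A", "site": "news"})` returns the model's estimated click probability for that impression; production CTR models replace the one-hot/linear stage with learned embeddings and deep feature-interaction layers.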
Libraries
Use these libraries to find Click-Through Rate Prediction models and implementations
Latest papers with no code
PPM: A Pre-trained Plug-in Model for Click-through Rate Prediction
However, the huge parameter count of the pre-trained model causes an explosive growth in online latency.
MetaSplit: Meta-Split Network for Limited-Stock Product Recommendation
Due to limited user interactions for each product (i.e., item), the corresponding item embedding in the CTR model may not easily converge.
Improved Online Learning Algorithms for CTR Prediction in Ad Auctions
In this work, we investigate the online learning problem of revenue maximization in ad auctions, where the seller needs to learn the click-through rate (CTR) of each ad candidate and charges the winner in a pay-per-click manner.
LiRank: Industrial Large Scale Ranking Models at LinkedIn
We present LiRank, a large-scale ranking framework at LinkedIn that brings to production state-of-the-art modeling architectures and optimization methods.
Calibration-then-Calculation: A Variance Reduced Metric Framework in Deep Click-Through Rate Prediction Models
The metric variance comes from the randomness inherent in the training process of deep learning pipelines.
GACE: Learning Graph-Based Cross-Page Ads Embedding For Click-Through Rate Prediction
In this paper, we propose GACE, a graph-based cross-page ads embedding generation method.
Deep Evolutional Instant Interest Network for CTR Prediction in Trigger-Induced Recommendation
Recently, a new recommendation scenario called Trigger-Induced Recommendation (TIR), in which users can explicitly express their instant interests via trigger items, has come to play an essential role on many e-commerce platforms, e.g., Alibaba.com and Amazon.
Fine-Grained Embedding Dimension Optimization During Training for Recommender Systems
Huge embedding tables in modern Deep Learning Recommender Models (DLRM) require prohibitively large memory during training and inference.
Less or More From Teacher: Exploiting Trilateral Geometry For Knowledge Distillation
A simple neural network then learns, in a bilevel-optimization manner, the implicit mapping from intra- and inter-sample relations to an adaptive, sample-wise knowledge-fusion ratio.
Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction
A typical practice is privileged features distillation (PFD): train a teacher model on all features (including privileged ones), then distill its knowledge into a student model that excludes the privileged features, and deploy the student for online serving.
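The PFD recipe described above can be sketched in a few lines. Everything here, the synthetic data, the "dwell time" privileged feature, and the fixed distillation blend `alpha`, is a toy assumption for illustration, not the paper's actual architecture or loss:

```python
import numpy as np

# Synthetic data: 3 regular features x, plus one privileged feature z
# (e.g. dwell time) that is only observable at training time.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 3))
z = rng.normal(size=(n, 1)) + x[:, :1]        # correlated with x[:, 0]
logits = 1.5 * z[:, 0] + x[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def fit(X, targets, steps=3000, lr=0.5):
    """Logistic regression by gradient descent; targets may be soft."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - targets) / len(targets)
    return w

# Teacher: trained with the privileged feature included.
Xt = np.hstack([x, z])
wt = fit(Xt, y)
teacher_p = 1 / (1 + np.exp(-Xt @ wt))        # soft labels from the teacher

# Student: regular features only, trained on a blend of hard labels
# and the teacher's soft predictions (the distillation step).
alpha = 0.5
ws = fit(x, alpha * y + (1 - alpha) * teacher_p)
```

At serving time only `ws` and the regular features are needed; the privileged signal reaches the student indirectly through the teacher's soft labels.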