Learning effective feature crosses is key to building recommender systems.
CLICK-THROUGH RATE PREDICTION, LEARNING-TO-RANK, RECOMMENDATION SYSTEMS
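A minimal sketch of one way explicit feature crosses can be learned, assuming a DCN-style cross layer in PyTorch; the layer form, dimensions, and initialisation are illustrative assumptions, not necessarily the construction used in the work above.

```python
import torch
import torch.nn as nn

class CrossLayer(nn.Module):
    """One explicit feature-cross layer: x_next = x0 * (x . w) + b + x (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, x0, x):
        # (batch, dim) @ (dim,) -> (batch,); broadcast the scalar back over x0
        interaction = (x @ self.w).unsqueeze(1)
        return x0 * interaction + self.b + x

# usage: stack a few layers on the concatenated feature embeddings
x0 = torch.randn(8, 16)
x1 = CrossLayer(16)(x0, x0)   # first-order crosses of the input with itself
```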
Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting.
ACTIVE LEARNING, CROWD COUNTING, IMAGE QUALITY ASSESSMENT, LEARNING-TO-RANK, SELF-SUPERVISED LEARNING
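A hedged sketch of combining a regression objective on labeled data with a pairwise ranking objective on unlabeled data, as the abstract above describes; the margin value, the weighting alpha, and the convention that pred_unlab_a should outrank pred_unlab_b are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred_labeled, target, pred_unlab_a, pred_unlab_b, alpha=1.0):
    """Regress to ground-truth targets on labeled data while learning to rank
    unlabeled pairs (pred_unlab_a is assumed to belong to the sample that
    should score higher)."""
    regression = F.mse_loss(pred_labeled, target)
    ranking = F.margin_ranking_loss(
        pred_unlab_a, pred_unlab_b,
        target=torch.ones_like(pred_unlab_a),  # +1: first input should be larger
        margin=0.5)
    return regression + alpha * ranking
```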
Most proposed person re-identification algorithms conduct supervised training and testing on single, small labeled datasets, so directly deploying these trained models to a large-scale real-world camera network may lead to poor performance due to underfitting.
INCREMENTAL LEARNING, LEARNING-TO-RANK, TRANSFER LEARNING, UNSUPERVISED PERSON RE-IDENTIFICATION
In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users. Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss.
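As an illustration of the "score items individually, optimise a pairwise loss" setup described above, the sketch below uses a small feed-forward scorer and a RankNet-style logistic pairwise loss; the network shape and loss choice are assumptions, not a specific system from the abstracts here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Scoring function applied to each item independently (no list context).
scorer = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

def pairwise_loss(feats_preferred, feats_other):
    """Penalise pairs where the less relevant item outscores the more relevant one."""
    s_pos = scorer(feats_preferred).squeeze(-1)
    s_neg = scorer(feats_other).squeeze(-1)
    # softplus(s_neg - s_pos) == -log sigmoid(s_pos - s_neg)
    return F.softplus(s_neg - s_pos).mean()
```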
In this work, we propose PT-Ranking, an open-source project based on PyTorch for developing and evaluating learning-to-rank methods using deep neural networks as the basis to construct a scoring function.
Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft match features, and a learning-to-rank layer that combines those features into the final ranking score.
AD-HOC INFORMATION RETRIEVAL, DOCUMENT RANKING, LEARNING-TO-RANK, WORD EMBEDDINGS
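A minimal sketch of kernel pooling over a query-document similarity matrix in the spirit of the description above; the kernel centres and width below are illustrative assumptions.

```python
import torch

def kernel_pooling(sim_matrix, mus=torch.linspace(-1.0, 1.0, 11), sigma=0.1):
    """sim_matrix: (n_query_terms, n_doc_terms) word-embedding similarities.
    Returns one soft-match feature per kernel, to be fed into a
    learning-to-rank layer."""
    # RBF response of every similarity value to every kernel centre
    k = torch.exp(-((sim_matrix.unsqueeze(-1) - mus) ** 2) / (2 * sigma ** 2))
    soft_tf = k.sum(dim=1)                                   # pool over doc terms
    return torch.log(soft_tf.clamp(min=1e-10)).sum(dim=0)    # sum over query terms
```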
Although click data is widely used in search systems in practice, its inherent bias, most notably position bias, has so far prevented it from being used to train a ranker for search, i.e., learning-to-rank.
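One common way to correct for position bias when training on clicks is inverse propensity weighting; the sketch below is a generic illustration of that idea under assumed examination propensities, not necessarily the estimator this work proposes.

```python
import torch
import torch.nn.functional as F

def ipw_pairwise_loss(score_clicked, score_unclicked, clicked_positions, propensity):
    """Up-weight clicks at rarely examined positions; propensity[k] is the
    assumed probability that position k is examined."""
    weights = 1.0 / propensity[clicked_positions]          # per-click IPW weight
    pair_loss = F.softplus(score_unclicked - score_clicked)
    return (weights * pair_loss).mean()
```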
In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.
We introduce a novel latent vector space model that jointly learns the latent representations of words, e-commerce products and a mapping between the two without the need for explicit annotations.
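A hedged sketch of a latent vector space model that jointly learns word and product embeddings plus a mapping between the two spaces, as described above; the word averaging, tanh projection, and dot-product scoring are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentVectorSpaceModel(nn.Module):
    def __init__(self, n_words, n_products, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.product_emb = nn.Embedding(n_products, dim)
        self.mapping = nn.Linear(dim, dim)   # word space -> product space

    def score(self, query_word_ids, product_ids):
        """query_word_ids: (batch, n_terms); product_ids: (batch,)."""
        q = self.word_emb(query_word_ids).mean(dim=1)  # average the query words
        q = torch.tanh(self.mapping(q))                # project into product space
        p = self.product_emb(product_ids)
        return (q * p).sum(dim=-1)                     # dot-product relevance score
```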