Learning-To-Rank
178 papers with code • 0 benchmarks • 9 datasets
Learning to rank is the application of machine learning to build ranking models. Common use cases for ranking models include information retrieval (e.g., web search) and news feed applications (e.g., Twitter, Facebook, Instagram).
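At its core, a learning-to-rank model scores items so that sorting by score reproduces the desired ordering. Below is a minimal sketch of the pairwise approach with a linear scoring model and a hinge loss; all feature values and relevance labels are invented for illustration.

```python
import numpy as np

# Toy documents: feature 0 tracks relevance, feature 1 is noise (invented data).
X = np.array([[3.0, 0.1],
              [2.0, -0.2],
              [2.0, 0.3],
              [1.0, 0.0],
              [0.0, 0.2],
              [0.0, -0.1]])
y = np.array([3, 2, 2, 1, 0, 0])  # graded relevance labels

w = np.zeros(2)  # linear scoring model: score(x) = w @ x
lr = 0.1

# Pairwise hinge loss: whenever doc i is more relevant than doc j,
# push score(i) above score(j) by a margin of 1.
for _ in range(50):
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j] and (X[i] - X[j]) @ w < 1.0:
                w += lr * (X[i] - X[j])

scores = X @ w
ranking = np.argsort(-scores)  # document indices, best first
```

Sorting by the learned scores places the most relevant document first; listwise methods instead optimize a metric over the whole ranked list rather than over pairs.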
Benchmarks
These leaderboards are used to track progress in Learning-To-Rank
Libraries
Use these libraries to find Learning-To-Rank models and implementations
Datasets
Latest papers
A Learning-to-Rank Formulation of Clustering-Based Approximate Nearest Neighbor Search
The objective of approximate nearest neighbor search is to return a set of $k$ data points that are closest to a query point, with accuracy measured by the proportion of exact nearest neighbors captured in the returned set.
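The accuracy measure described here is often called recall@k. A small sketch, with invented points and a hypothetical approximate candidate set:

```python
import numpy as np

def recall_at_k(exact, returned, k):
    """Proportion of the k exact nearest neighbors found in the returned set."""
    return len(set(exact[:k]) & set(returned[:k])) / k

points = np.array([0.0, 1.0, 2.0, 10.0, 11.0])  # toy 1-D dataset
query = 0.5

exact = np.argsort(np.abs(points - query)).tolist()  # brute-force exact k-NN
approx = [1, 2, 3]  # hypothetical candidates from a pruned (approximate) search

r = recall_at_k(exact, approx, k=3)  # 2 of the 3 true neighbors captured
```

Clustering-based methods trade some of this recall for speed by searching only the clusters judged most promising for the query.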
Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
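A central tool in CLTR is inverse propensity scoring (IPS): logged clicks are reweighted by the probability that the clicked position was examined, correcting for position bias. A minimal sketch, assuming illustrative position-based propensities:

```python
clicks = [1, 0, 1, 0]                   # logged clicks on a 4-result page
propensities = [1.0, 0.5, 0.33, 0.25]   # assumed examination prob. per position

# IPS credit per document: click / examination propensity. A click far down
# the page counts for more, since it was less likely to be seen at all.
ips_weights = [c / p for c, p in zip(clicks, propensities)]
```

These weights then scale the per-document loss when training the ranker, yielding an unbiased estimate of the loss under full examination (assuming the propensities are correct).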
Unbiased Learning to Rank Meets Reality: Lessons from Baidu's Large-Scale Search Dataset
However, these gains in click prediction do not translate to enhanced ranking performance on expert relevance annotations, implying that conclusions strongly depend on how success is measured in this benchmark.
Learning to Rank Patches for Unbiased Image Redundancy Reduction
The results demonstrate that LTRP outperforms both supervised and other self-supervised methods due to the fair assessment of image content.
RankingSHAP -- Listwise Feature Attribution Explanations for Ranking Models
We evaluate RankingSHAP for commonly used learning-to-rank datasets to showcase the more nuanced use of an attribution method while highlighting the limitations of selection-based explanations.
Metasql: A Generate-then-Rank Framework for Natural Language to SQL Translation
While these translation models have greatly improved overall translation accuracy, surpassing 70% on NLIDB benchmarks, using auto-regressive decoding to generate a single SQL query may produce sub-optimal outputs and lead to erroneous translations.
Explain then Rank: Scale Calibration of Neural Rankers Using Natural Language Explanations from Large Language Models
Scale calibration in ranking systems adjusts rankers' outputs to align with meaningful quantities such as click-through rates or relevance, which is crucial for reflecting real-world value and thereby improving the system's effectiveness and reliability.
List-aware Reranking-Truncation Joint Model for Search and Retrieval-augmented Generation
First, it is hard to share the contextual information of the ranking list between the two tasks.
How to Forget Clients in Federated Online Learning to Rank?
In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model.
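The aggregation step can be sketched as simple FedAvg-style weight averaging; the model shape, client count, and simulated local updates below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
global_ranker = np.zeros(3)  # weights of a toy linear ranking model

def local_update(model):
    # Each client adjusts the ranker using only its own logged interactions
    # (simulated here by a small random gradient step).
    return model + 0.1 * rng.normal(size=model.shape)

# One FOLTR round: clients train locally, then the server averages the
# updated rankers without ever seeing the raw interaction data.
local_rankers = [local_update(global_ranker) for _ in range(4)]
global_ranker = np.mean(local_rankers, axis=0)
```

Forgetting a client, as the paper's title asks, then amounts to removing that client's contribution from the aggregated model without retraining from scratch.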
Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine
Therefore, we focus on the task of retrieving target objects from open-vocabulary user instructions in a human-in-the-loop setting, which we define as the learning-to-rank physical objects (LTRPO) task.