Passage Re-ranking with BERT

13 Jan 2019  ·  Rodrigo Nogueira, Kyunghyun Cho

Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27% (relative) in MRR@10. The code to reproduce our results is available at https://github.com/nyu-dl/dl4marco-bert
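To make the re-ranking setup concrete, the sketch below shows one way a BERT cross-encoder re-ranker of this kind can be put together. It uses the Hugging Face transformers library rather than the authors' released TensorFlow code; the bert-base-uncased checkpoint, the truncation settings, and the rerank helper are illustrative assumptions, and the classification head would need to be fine-tuned on relevance labels (e.g., MS MARCO) before its scores are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative checkpoint; the paper fine-tunes a larger BERT model, and the
# classification head below must be fine-tuned on relevance labels before use.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def rerank(query, passages):
    """Score each (query, passage) pair with BERT and sort passages by score."""
    # Query and passage are encoded together as a sentence pair (segments A and B).
    enc = tokenizer(
        [query] * len(passages), passages,
        padding=True, truncation=True, max_length=512, return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**enc).logits                # shape: (num_passages, 2)
    scores = logits.softmax(dim=-1)[:, 1].tolist()  # probability of "relevant"
    return sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)

# Typical use: re-rank a few hundred candidate passages retrieved by BM25 per query.
candidates = [
    "BERT is a bidirectional transformer pretrained with masked language modeling.",
    "The Mariana Trench is the deepest part of the world's oceans.",
]
for passage, score in rerank("what is BERT", candidates):
    print(f"{score:.3f}  {passage}")
```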


Datasets

MS MARCO
TREC-CAR

Results from the Paper


Ranked #3 on Passage Re-Ranking on MS MARCO (using extra training data)

Task: Passage Re-Ranking
Dataset: MS MARCO
Model: BERT + Small Training
Metric: MRR
Metric Value: 0.359
Global Rank: #3
Uses Extra Training Data: Yes
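The metric in the entry above is mean reciprocal rank; the paper reports MRR@10, which averages, over queries, the reciprocal rank of the first relevant passage among the top 10 results (zero if none appears). A small illustrative sketch of the computation, with made-up passage IDs and a helper name of my own:

```python
def mrr_at_10(ranked_ids_per_query, relevant_ids_per_query):
    """Mean reciprocal rank, counting only the top 10 results per query."""
    total = 0.0
    for ranked_ids, relevant_ids in zip(ranked_ids_per_query, relevant_ids_per_query):
        for rank, pid in enumerate(ranked_ids[:10], start=1):
            if pid in relevant_ids:
                total += 1.0 / rank
                break  # only the first relevant passage counts
    return total / len(ranked_ids_per_query)

# Toy example: the relevant passage sits at rank 2 for the first query and
# rank 5 for the second, so MRR@10 = (1/2 + 1/5) / 2 = 0.35.
print(mrr_at_10([[7, 3, 9], [4, 8, 1, 6, 2]], [{3}, {2}]))
```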

Methods

BERT