no code implementations • 6 Jul 2022 • David Rau, Jaap Kamps
Even though term-based methods such as BM25 provide strong baselines in ranking, under certain conditions they are dominated by large pre-trained masked language models (MLMs) such as BERT.
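For context, below is a minimal, self-contained sketch of Okapi BM25 scoring, the term-based baseline named above. It is illustrative only, not code from the paper; the parameter defaults k1=1.2 and b=0.75 are common conventions, and the idf uses the non-negative (Lucene-style) variant.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against a query with Okapi BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        # df: number of documents in the corpus containing the term
        df = sum(1 for d in corpus if term in d)
        # +1 inside the log keeps idf non-negative for very common terms
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        freq = tf[term]
        denom = freq + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * freq * (k1 + 1) / denom
    return score

corpus = [["deep", "learning", "for", "ranking"],
          ["bm25", "is", "a", "strong", "baseline"],
          ["masked", "language", "models", "for", "ranking"]]
query = ["ranking", "baseline"]
for doc in corpus:
    print(doc, round(bm25_score(query, doc, corpus), 3))
```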
1 code implementation • 5 Apr 2022 • David Rau, Jaap Kamps
Our results contribute to our understanding of (black-box) neural rankers relative to (well-understood) traditional rankers, and help explain the particular experimental setting of MS-Marco-based test collections.
no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni
We present a detailed comparison of two types of sequence-to-sequence models trained to perform a compositional task.
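For readers unfamiliar with the model family, here is a bare-bones GRU encoder-decoder in PyTorch. The snippet above does not say which two model types the paper compares, so this sketch is purely illustrative.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder (illustrative, not the paper's models)."""

    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))       # encode the source
        dec, _ = self.decoder(self.embed(tgt), state)  # decode with teacher forcing
        return self.out(dec)                           # per-step vocabulary logits

model = Seq2Seq(vocab_size=20)
src = torch.randint(0, 20, (2, 5))  # batch of 2 source sequences
tgt = torch.randint(0, 20, (2, 7))  # target prefixes
print(model(src, tgt).shape)        # torch.Size([2, 7, 20])
```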
no code implementations • 18 Apr 2019 • Freek Boutkan, Jorn Ranzijn, David Rau, Eelco van der Wel
The Pointer-Generator architecture has been shown to be a significant improvement for abstractive summarization with seq2seq models.
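For context, the pointer-generator of See et al. (2017) mixes a generation distribution over the output vocabulary with a copy distribution induced by attention over the source tokens. Below is a minimal NumPy sketch of that mixing step; it is an illustration of the published formulation, not code from this paper.

```python
import numpy as np

def pointer_generator_dist(p_vocab, attention, src_ids, p_gen):
    """Final output distribution of a pointer-generator decoder step.

    p_vocab   : (vocab_size,) softmax over the output vocabulary
    attention : (src_len,) attention weights over source positions
    src_ids   : (src_len,) vocabulary id of each source token
    p_gen     : scalar in [0, 1], probability of generating vs. copying
    """
    final = p_gen * p_vocab
    # Scatter-add attention mass onto the ids of the source tokens,
    # so source words can be copied even if rare in the vocabulary.
    np.add.at(final, src_ids, (1.0 - p_gen) * attention)
    return final

p_vocab = np.full(10, 0.1)              # uniform vocabulary distribution
attention = np.array([0.7, 0.2, 0.1])   # attention over 3 source tokens
src_ids = np.array([3, 3, 8])           # their vocabulary ids
dist = pointer_generator_dist(p_vocab, attention, src_ids, p_gen=0.6)
print(dist, dist.sum())                 # still sums to 1.0
```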