Search Results for author: David Rau

Found 4 papers, 1 paper with code

The Role of Complex NLP in Transformers for Text Ranking?

no code implementations • 6 Jul 2022 • David Rau, Jaap Kamps

Even though term-based methods such as BM25 provide strong baselines in ranking, under certain conditions they are dominated by large pre-trained masked language models (MLMs) such as BERT.

Position • Re-Ranking
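As background for the BM25 baseline mentioned in the abstract above, here is a minimal sketch of BM25 term-based scoring. The toy corpus, tokenization, and the common k1/b defaults are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document for a query with BM25 (Lucene-style IDF)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N  # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[term]  # term frequency in this document
        score += idf * f * (k1 + 1) / (
            f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        )
    return score

# Toy corpus of pre-tokenized documents (purely illustrative).
corpus = [
    ["term", "based", "ranking"],
    ["bert", "reranking"],
    ["bm25", "baseline", "ranking"],
]
print(bm25_score(["ranking"], corpus[0], corpus))
```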

How Different are Pre-trained Transformers for Text Ranking?

1 code implementation • 5 Apr 2022 • David Rau, Jaap Kamps

Our results contribute to our understanding of (black-box) neural rankers relative to (well-understood) traditional rankers, and help explain the particular experimental setting of MS-Marco-based test collections.

Passage Retrieval • Retrieval
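For context on how pre-trained transformers are typically used as text rankers, below is a minimal cross-encoder re-ranking sketch: the model scores each (query, passage) pair jointly and the passages are sorted by score. The Hugging Face checkpoint name and the query/passage strings are assumptions for illustration, not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical choice of a public MS MARCO cross-encoder checkpoint.
name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

query = "what is bm25"
passages = [
    "BM25 is a bag-of-words ranking function.",
    "BERT is a masked language model.",
]

# Encode each query-passage pair as one joint input sequence.
inputs = tokenizer(
    [query] * len(passages), passages,
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one relevance logit per pair

ranked = sorted(zip(passages, scores.tolist()), key=lambda x: -x[1])
print(ranked)
```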

On the Realization of Compositionality in Neural Networks

no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni

We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.

Point-less: More Abstractive Summarization with Pointer-Generator Networks

no code implementations • 18 Apr 2019 • Freek Boutkan, Jorn Ranzijn, David Rau, Eelco van der Wel

The Pointer-Generator architecture has been shown to be a significant improvement for abstractive summarization with seq2seq models.

Abstractive Text Summarization
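To make the Pointer-Generator idea concrete, here is a toy sketch of its final output distribution (See et al., 2017): the model mixes a generation distribution over the vocabulary with a copy distribution given by attention over source tokens, P(w) = p_gen · P_vocab(w) + (1 − p_gen) · Σ attention over source positions of w. All tensor values and sizes below are made up for illustration.

```python
import torch

vocab_size = 10
p_vocab = torch.softmax(torch.randn(vocab_size), dim=-1)  # decoder's generation distribution
attention = torch.softmax(torch.randn(4), dim=-1)         # attention over 4 source tokens
src_ids = torch.tensor([2, 5, 5, 7])                      # vocab ids of the source tokens
p_gen = torch.sigmoid(torch.randn(1))                     # generation probability in (0, 1)

# Weight the vocabulary distribution, then scatter the copy
# probability mass onto the vocab ids of the source tokens.
final = p_gen * p_vocab
final = final.scatter_add(0, src_ids, (1 - p_gen) * attention)
print(final.sum())  # ~1.0: a valid mixture of generating and copying
```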
