Search Results for author: Robert Litschko

Found 14 papers, 9 papers with code

Establishing Trustworthiness: Rethinking Tasks and Model Evaluation

no code implementations • 9 Oct 2023 • Robert Litschko, Max Müller-Eberstein, Rob van der Goot, Leon Weber, Barbara Plank

Language understanding is a multi-faceted cognitive capability, which the Natural Language Processing (NLP) community has striven to model computationally for decades.

Donkii: Can Annotation Error Detection Methods Find Errors in Instruction-Tuning Datasets?

1 code implementation • 4 Sep 2023 • Leon Weber-Genzel, Robert Litschko, Ekaterina Artemova, Barbara Plank

Our results show that the choice of the right AED method and model size is indeed crucial, and we derive practical recommendations for how to use AED methods to clean instruction-tuning data.

Text Generation
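As a rough illustration of one simple AED baseline (loss-based flagging — an assumption here, not necessarily the paper's exact method), the sketch below scores instruction-response pairs by their language-model loss and surfaces the most suspicious ones:

```python
# Minimal, illustrative AED baseline: score each instruction-response pair by
# its per-token loss under a small LM and flag the highest-loss examples as
# potential annotation errors. Model choice and threshold are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def example_loss(instruction: str, response: str) -> float:
    """Mean per-token negative log-likelihood of the pair under the LM."""
    ids = tok(instruction + "\n" + response, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # HF shifts labels internally
    return out.loss.item()

data = [
    ("Translate to German: good morning", "Guten Morgen"),
    ("Translate to German: good morning", "12345"),  # likely annotation error
]
# Rank examples by loss and inspect the top of the list manually.
ranked = sorted(data, key=lambda ex: example_loss(*ex), reverse=True)
print(ranked[0])  # the suspicious pair should surface first
```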

A General-Purpose Multilingual Document Encoder

1 code implementation • 11 May 2023 • Onur Galoğlu, Robert Litschko, Goran Glavaš

While a large body of work has leveraged MMTs to mine parallel data and induce bilingual document embeddings, much less effort has been devoted to training a general-purpose (massively) multilingual document encoder that can be used for both supervised and unsupervised document-level tasks.

Cross-Lingual Transfer • Document Classification +3
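For context, here is a minimal sketch of deriving document embeddings from an off-the-shelf MMT by mean pooling; the paper's contribution is a dedicated, trained document encoder, which this does not reproduce:

```python
# Crude document embeddings from a pretrained MMT: mean-pool token
# representations over the (truncated) document. A baseline sketch only.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

def embed_document(text: str) -> torch.Tensor:
    """Mean-pooled token embeddings as a document representation."""
    batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)

doc_en = embed_document("The committee approved the budget.")
doc_de = embed_document("Der Ausschuss hat den Haushalt genehmigt.")
print(f"cross-lingual similarity: {torch.cosine_similarity(doc_en, doc_de).item():.3f}")
```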

Boosting Zero-shot Cross-lingual Retrieval by Training on Artificially Code-Switched Data

1 code implementation • 9 May 2023 • Robert Litschko, Ekaterina Artemova, Barbara Plank

Transferring information retrieval (IR) models from a high-resource language (typically English) to other languages in a zero-shot fashion has become a widely adopted approach.

Cross-Lingual Word Embeddings • Information Retrieval +2
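A hedged sketch of the artificial code-switching idea: substitute some English training-query terms with bilingual-dictionary translations. The toy lexicon and substitution ratio below are illustrative assumptions, not the paper's settings:

```python
# Artificially code-switch English training text by swapping dictionary
# words for their translations with some probability.
import random

EN_DE = {"president": "Präsident", "election": "Wahl", "economy": "Wirtschaft"}

def code_switch(text: str, lexicon: dict, ratio: float = 0.5) -> str:
    """Replace in-lexicon words with translations with probability `ratio`."""
    return " ".join(
        lexicon[t.lower()] if t.lower() in lexicon and random.random() < ratio else t
        for t in text.split()
    )

random.seed(0)
print(code_switch("the president spoke about the economy", EN_DE))
# e.g. "the Präsident spoke about the Wirtschaft"
```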

Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval

1 code implementation • COLING 2022 • Robert Litschko, Ivan Vulić, Goran Glavaš

Current approaches therefore commonly transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all parameters of pretrained massively multilingual Transformers (MMTs, e.g., multilingual BERT) on English relevance judgments, and then deploy them in the target language(s).

Cross-Lingual Transfer • Language Modelling +3
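The parameter-efficient alternative keeps the MMT frozen and trains only small inserted modules. Below is a minimal bottleneck-adapter sketch in plain PyTorch; the layer placement and sizes are assumptions, not the paper's exact configuration:

```python
# Bottleneck adapter: a small residual module trained while the pretrained
# backbone stays frozen, so each language/task adds only ~100k parameters.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's features intact.
        return hidden + self.up(self.act(self.down(hidden)))

adapter = Adapter()
h = torch.randn(2, 10, 768)  # (batch, seq_len, hidden_dim)
print(adapter(h).shape)      # torch.Size([2, 10, 768])
print(sum(p.numel() for p in adapter.parameters()), "trainable parameters")
```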

On Cross-Lingual Retrieval with Multilingual Text Encoders

1 code implementation • 21 Dec 2021 • Robert Litschko, Ivan Vulić, Simone Paolo Ponzetto, Goran Glavaš

In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs.

Re-Ranking • Retrieval +2
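A minimal retrieval sketch, assuming an off-the-shelf multilingual sentence encoder as a stand-in for the encoders the paper benchmarks: embed the query and candidates, then rank by cosine similarity:

```python
# Cross-lingual sentence retrieval with a multilingual encoder: rank
# candidate sentences by cosine similarity to the query embedding.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

query = "climate change effects on agriculture"
candidates = [
    "Folgen des Klimawandels für die Landwirtschaft",  # relevant (German)
    "Die Geschichte des Buchdrucks in Europa",          # irrelevant
]
q = model.encode([query], normalize_embeddings=True)
c = model.encode(candidates, normalize_embeddings=True)
scores = (q @ c.T).ravel()  # cosine similarity via dot product of unit vectors
for rank in np.argsort(-scores):
    print(f"{scores[rank]:.3f}  {candidates[rank]}")
```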

Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval

1 code implementation • 21 Jan 2021 • Robert Litschko, Ivan Vulić, Simone Paolo Ponzetto, Goran Glavaš

In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a large number of language pairs.

Cross-Lingual Word Embeddings • Representation Learning +3

Probing Pretrained Language Models for Lexical Semantics

no code implementations • EMNLP 2020 • Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, Anna Korhonen

The success of large pretrained language models (LMs) such as BERT and RoBERTa has sparked interest in probing their representations, in order to unveil what types of knowledge they implicitly capture.

World Knowledge
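A hedged probing sketch along these lines: pull a word's representation from a chosen BERT layer and compare cosine similarities of word pairs. The layer index and word list are illustrative assumptions:

```python
# Probe whether a hidden layer encodes lexical similarity: represent each
# word by its (subword-averaged) activation and compare word pairs.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def word_vector(word: str, layer: int = 8) -> torch.Tensor:
    """Representation of a word typed in isolation, from one hidden layer."""
    batch = tok(word, return_tensors="pt")
    with torch.no_grad():
        states = model(**batch).hidden_states[layer]  # (1, seq_len, dim)
    return states[0, 1:-1].mean(0)  # average subwords, skip [CLS]/[SEP]

sim = torch.nn.functional.cosine_similarity
for w1, w2 in [("car", "automobile"), ("car", "banana")]:
    print(w1, w2, f"{sim(word_vector(w1), word_vector(w2), dim=0).item():.3f}")
# A higher score for (car, automobile) suggests the layer captures similarity.
```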

Towards Instance-Level Parser Selection for Cross-Lingual Transfer of Dependency Parsers

no code implementations • COLING 2020 • Robert Litschko, Ivan Vulić, Željko Agić, Goran Glavaš

Current methods of cross-lingual parser transfer focus on predicting the best parser for a low-resource target language globally, that is, "at treebank level".

Cross-Lingual Transfer • POS
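A toy illustration of the instance-level idea: score each candidate source parser against an individual target sentence and pick the best match per instance rather than per treebank. The POS-overlap scoring below is a placeholder assumption, not the paper's selection method:

```python
# Instance-level parser selection: choose a source parser per sentence by
# comparing the sentence's POS profile to (hypothetical) parser profiles.
from collections import Counter

# Hypothetical POS profiles of parsers trained on different source treebanks.
PARSER_PROFILES = {
    "parser_en": Counter({"DET": 3, "NOUN": 4, "VERB": 2, "ADP": 1}),
    "parser_fi": Counter({"NOUN": 6, "VERB": 3, "CASE": 1}),
}

def suitability(sentence_pos: list, profile: Counter) -> float:
    """Crude overlap between a sentence's POS counts and a parser profile."""
    sent = Counter(sentence_pos)
    return sum(min(sent[p], profile[p]) for p in sent) / max(sum(profile.values()), 1)

def select_parser(sentence_pos: list) -> str:
    """Instance-level selection: best parser for THIS sentence."""
    return max(PARSER_PROFILES, key=lambda n: suitability(sentence_pos, PARSER_PROFILES[n]))

print(select_parser(["DET", "NOUN", "VERB", "DET", "NOUN"]))  # parser_en
```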

Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only

1 code implementation • 2 May 2018 • Robert Litschko, Goran Glavaš, Simone Paolo Ponzetto, Ivan Vulić

We propose a fully unsupervised framework for ad-hoc cross-lingual information retrieval (CLIR) which requires no bilingual data at all.

Cross-Lingual Information Retrieval • Retrieval
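A toy sketch of the core idea of term-by-term query translation through a shared cross-lingual word embedding space; the tiny hand-made vectors below stand in for real CLWEs induced from monolingual data only:

```python
# Unsupervised CLIR building block: translate each query term to its nearest
# target-language neighbor in a shared embedding space, then retrieve
# monolingually (e.g., with BM25 over the target collection).
import numpy as np

# Shared embedding space: English and German words as toy vectors.
VOCAB = {
    "dog":   np.array([0.90, 0.10, 0.10]), "hund": np.array([0.88, 0.12, 0.10]),
    "house": np.array([0.10, 0.90, 0.10]), "haus": np.array([0.12, 0.90, 0.08]),
}
GERMAN = {"hund", "haus"}

def translate(term: str) -> str:
    """Nearest German neighbor of an English term by cosine similarity."""
    q = VOCAB[term] / np.linalg.norm(VOCAB[term])
    return max(GERMAN, key=lambda w: q @ (VOCAB[w] / np.linalg.norm(VOCAB[w])))

query = ["dog", "house"]
print([translate(t) for t in query])  # ['hund', 'haus'] -> feed to a German index
```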
