Search Results for author: Luca Di Liello

Found 7 papers, 3 papers with code

Structural Self-Supervised Objectives for Transformers

1 code implementation • 15 Sep 2023 • Luca Di Liello

This thesis focuses on improving the pre-training of natural language models using unsupervised raw data to make them more efficient and aligned with downstream applications.

Ranked #1 on Question Answering on TrecQA (using extra training data)

Fact Verification • Language Modelling • +3

Context-Aware Transformer Pre-Training for Answer Sentence Selection

no code implementations • 24 May 2023 • Luca Di Liello, Siddhant Garg, Alessandro Moschitti

Answer Sentence Selection (AS2) is a core component for building an accurate Question Answering pipeline.

Ranked #4 on Question Answering on TrecQA (using extra training data)

Question Answering • Sentence

Effective Pre-Training Objectives for Transformer-based Autoencoders

no code implementations • 24 Oct 2022 • Luca Di Liello, Matteo Gabburo, Alessandro Moschitti

In this paper, we study the trade-offs between efficiency, cost, and accuracy when pre-training Transformer encoders with different pre-training objectives.

Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection

no code implementations • 20 May 2022 • Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti

An important task for designing QA systems is answer sentence selection (AS2): selecting the sentence containing (or constituting) the answer to a question from a set of retrieved relevant documents.

Answer Selection • Sentence
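
A minimal sketch of what AS2 ranking with a cross-encoder could look like: each (question, candidate sentence) pair is scored and the highest-scoring candidate is returned. The checkpoint name, label convention, and example data below are placeholders for illustration, not details taken from the paper.

```python
# AS2 sketch with a sequence-classification cross-encoder.
# Assumption: a checkpoint fine-tuned for AS2 would be used in practice;
# "bert-base-uncased" here is only a placeholder with an untrained head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def select_answer(question: str, candidates: list[str]) -> str:
    """Score every (question, candidate) pair and return the top-ranked sentence."""
    inputs = tokenizer(
        [question] * len(candidates),
        candidates,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits        # shape: (num_candidates, 2)
    scores = logits.softmax(dim=-1)[:, 1]      # assumed "correct answer" class probability
    return candidates[scores.argmax().item()]

question = "When was the Eiffel Tower completed?"
candidates = [
    "The Eiffel Tower is located in Paris.",
    "Construction of the Eiffel Tower was completed in 1889.",
    "Gustave Eiffel's company designed the tower.",
]
print(select_answer(question, candidates))
```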

Paragraph-based Transformer Pre-training for Multi-Sentence Inference

1 code implementation • NAACL 2022 • Luca Di Liello, Siddhant Garg, Luca Soldaini, Alessandro Moschitti

Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the superiority of our pre-training technique over traditional ones, both for transformers used as joint models for multi-candidate inference and for transformers used as cross-encoders on sentence-pair formulations of these tasks.

Answer Selection • Fact Verification • +1
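
A hedged sketch contrasting the two input formulations mentioned in the abstract: sentence-pair cross-encoding versus packing the question and several candidates into a single joint input. The tokenizer, separator scheme, and example data are assumptions for illustration, not the paper's exact setup.

```python
# Illustrates input construction only; the paper's actual joint-model architecture
# and separators may differ.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

question = "Who painted the Mona Lisa?"
candidates = [
    "The Mona Lisa was painted by Leonardo da Vinci.",
    "The painting hangs in the Louvre in Paris.",
    "Leonardo was born in 1452.",
]

# Cross-encoder / sentence-pair formulation: one (question, candidate) pair per encoding.
pair_inputs = [tokenizer(question, c, truncation=True) for c in candidates]

# Joint multi-candidate formulation: the question plus all k candidates in a single input,
# here joined with the separator token (an assumption, not necessarily the paper's scheme).
joint_text = tokenizer.sep_token.join(candidates)
joint_input = tokenizer(question, joint_text, truncation=True)

print(len(pair_inputs), "pair encodings vs. 1 joint encoding of length",
      len(joint_input["input_ids"]))
```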

Efficient pre-training objectives for Transformers

no code implementations • 20 Apr 2021 • Luca Di Liello, Matteo Gabburo, Alessandro Moschitti

The Transformer architecture profoundly changed natural language processing, outperforming all previous state-of-the-art models.
