Search Results for author: Elena Sofia Ruzzetti

Found 7 papers, 0 papers with code

Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL Translation

no code implementations12 Feb 2024 Federico Ranaldi, Elena Sofia Ruzzetti, Dario Onorati, Leonardo Ranaldi, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto

Our results indicate a significant performance drop in GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks (a minimal, illustrative evaluation sketch follows this entry).

Instruction Following Text-To-SQL +1
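
The contamination analysis above rests on comparing an LLM's SQL generation on familiar versus unseen databases. The snippet below is only a hypothetical sketch of that kind of probe, not the authors' pipeline: the schema, question, gold query, and model id are invented placeholders, and the call follows the openai>=1.0 Python client interface.

```python
# Hypothetical sketch: probe a chat LLM for text-to-SQL on an unfamiliar schema.
# Schema, question, gold query, and model id are placeholders, not from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

schema = """CREATE TABLE employees (id INT, name TEXT, dept_id INT);
CREATE TABLE departments (dept_id INT, dept_name TEXT);"""
question = "List the names of employees in the 'Sales' department."
gold_sql = ("SELECT e.name FROM employees e JOIN departments d "
            "ON e.dept_id = d.dept_id WHERE d.dept_name = 'Sales';")

prompt = (
    "Given the following SQLite schema, write one SQL query that answers "
    f"the question.\n\nSchema:\n{schema}\n\nQuestion: {question}\nSQL:"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model id
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
predicted_sql = response.choices[0].message.content.strip()

# A contamination-style study would repeat this over a familiar benchmark and
# an unseen one and compare execution accuracy; here we only print the pair.
print("Predicted:", predicted_sql)
print("Gold:     ", gold_sql)
```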

Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts

no code implementations14 Nov 2023 Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by eliciting them to solve complex tasks in a step-by-step manner.
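
Chain-of-Thought prompting, as summarized above, amounts to asking the model to spell out intermediate steps before giving an answer. The following is a minimal, hypothetical illustration of that prompting pattern only; it does not reproduce the paper's cross-lingual Tree-of-Thoughts procedure, and the exemplar question is a standard invented placeholder.

```python
# Minimal Chain-of-Thought prompt construction (illustrative only; the paper
# extends this idea to multi-step reasoning across languages via a
# Tree-of-Thoughts strategy, which is not reproduced here).

FEW_SHOT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Let's think step by step. Roger starts with 5 balls. "
    "2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar and an explicit step-by-step cue,
    so the model is elicited to reason before answering."""
    return (
        FEW_SHOT_EXEMPLAR
        + f"Q: {question}\n"
        + "A: Let's think step by step."
    )

if __name__ == "__main__":
    # The resulting prompt could be sent to any LLM completion endpoint;
    # here we only print it.
    print(build_cot_prompt(
        "A cafeteria had 23 apples. It used 20 and bought 6 more. "
        "How many apples are there now?"
    ))
```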

A Trip Towards Fairness: Bias and De-Biasing in Large Language Models

no code implementations23 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto

In this paper, we performed a large-scale investigation of the bias of three families of CtB-LLMs and showed that debiasing techniques are effective and usable (a generic bias-probing sketch follows this entry).

Fairness
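
Bias in language models is often quantified by comparing the likelihood a model assigns to stereotyped versus anti-stereotyped minimal pairs. The sketch below is a generic illustration of that idea with a masked LM, not the paper's benchmark or metric; the model id and sentence pair are placeholders.

```python
# Generic masked-LM bias probe (illustrative sketch, not the paper's setup):
# score two minimal-pair sentences by pseudo-log-likelihood and compare.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log-probabilities of each token, masking one position at a time."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, ids.size(0) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

pair = ("The nurse said she would help.",   # placeholder stereotyped sentence
        "The nurse said he would help.")    # placeholder anti-stereotyped sentence
scores = {s: pseudo_log_likelihood(s) for s in pair}
print(scores)  # a consistent preference for one variant signals bias
```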

PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

no code implementations8 May 2023 Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto

Pre-trained Language Models such as BERT are impressive machines with the ability to memorize learning examples, possibly in a generalized form (a small illustrative memorization check follows this entry).

Memorization Relation
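
PreCog relates memorization of pre-training material to downstream performance. As a loose, hypothetical illustration of one way to test whether a masked LM has retained a specific string, the snippet below masks a token in a candidate sentence and checks whether the model restores it exactly; the example sentence is invented and this is not the paper's PreCog measure.

```python
# Hypothetical memorization check (not the paper's PreCog score): mask one
# token in a candidate sentence and see whether the model recovers it.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def recovers_masked_token(sentence: str, position: int) -> bool:
    """Mask the word-piece at `position` and return True if the model's
    top prediction restores the original token."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    target = ids[position].item()
    masked = ids.clone()
    masked[position] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(masked.unsqueeze(0)).logits[0, position]
    return int(logits.argmax()) == target

# Invented example sentence; a real study would use held-out pre-training data.
print(recovers_masked_token("The quick brown fox jumps over the lazy dog.", 4))
```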

Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages

no code implementations3 May 2023 Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto

The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language.

Domain Adaptation
