no code implementations • 12 Feb 2024 • Federico Ranaldi, Elena Sofia Ruzzetti, Dario Onorati, Leonardo Ranaldi, Cristina Giannone, Andrea Favalli, Raniero Romagnoli, Fabio Massimo Zanzotto
Our results indicate a significant performance drop in GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.
no code implementations • 14 Nov 2023 • Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
Reasoning methods, best exemplified by the well-known Chain-of-Thought (CoT), empower the reasoning abilities of Large Language Models (LLMs) by prompting them to solve complex tasks in a step-by-step manner.
no code implementations • 23 May 2023 • Leonardo Ranaldi, Elena Sofia Ruzzetti, Davide Venditti, Dario Onorati, Fabio Massimo Zanzotto
In this paper, we conducted a large-scale investigation of the bias of three families of CtB-LLMs, and we showed that debiasing techniques are effective and usable.
no code implementations • 8 May 2023 • Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
Pre-trained Language Models such as BERT are impressive machines with the ability to memorize, and possibly generalize, learning examples.
no code implementations • 3 May 2023 • Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto
The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language.
no code implementations • 14 Jan 2022 • Leonardo Ranaldi, Aria Nourbakhsh, Arianna Patrizi, Elena Sofia Ruzzetti, Dario Onorati, Francesca Fallucchi, Fabio Massimo Zanzotto
Pre-trained Transformers are challenging human performance in many NLP tasks.
no code implementations • Findings (ACL) 2022 • Elena Sofia Ruzzetti, Leonardo Ranaldi, Michele Mastromattei, Francesca Fallucchi, Fabio Massimo Zanzotto
In this paper, we propose to use definitions retrieved from traditional dictionaries to produce word embeddings for rare words.
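The core idea can be illustrated with a minimal sketch (an assumption for illustration, not the authors' exact method): a rare word's embedding is derived from its dictionary definition, here by simply averaging the pre-trained vectors of the definition's words.

```python
# Toy pre-trained embeddings (hypothetical values, for illustration only).
EMBEDDINGS = {
    "small": [0.25, 0.5],
    "furry": [0.75, 0.0],
    "animal": [0.5, 0.25],
}

def definition_embedding(definition, embeddings):
    """Embed a rare word by averaging the known vectors of its definition."""
    vectors = [embeddings[w] for w in definition.lower().split() if w in embeddings]
    if not vectors:
        raise ValueError("no known words in definition")
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

# A rare word defined as "small furry animal" gets the mean of those vectors.
vec = definition_embedding("small furry animal", EMBEDDINGS)
print(vec)  # -> [0.5, 0.25]
```

In practice one would use real pre-trained vectors (e.g. word2vec or fastText) and a weighting scheme rather than a plain mean, but the averaging step above captures the basic composition.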