no code implementations • ACL (GEM) 2021 • Lorenzo De Mattei, Huiyuan Lai, Felice Dell’Orletta, Malvina Nissim
We ask subjects whether they perceive a set of texts as human-produced; some of the texts are actually human-written, while others are automatically generated.
1 code implementation • ACL (EvalNLGEval, INLG) 2020 • Lorenzo De Mattei, Michele Cafagna, Huiyuan Lai, Felice Dell'Orletta, Malvina Nissim, Albert Gatt
An ongoing debate in the NLG community concerns the best way to evaluate systems, with human evaluation often being considered the most reliable method, compared to corpus-based metrics.
no code implementations • LREC 2020 • Rob van der Goot, Alan Ramponi, Tommaso Caselli, Michele Cafagna, Lorenzo De Mattei
However, for Italian, there is no benchmark available for lexical normalization, despite the presence of many benchmarks for other tasks involving social media data.
no code implementations • LREC 2020 • Lorenzo De Mattei, Michele Cafagna, Felice Dell'Orletta, Malvina Nissim
We automatically generate headlines that are expected to comply with the specific styles of two different Italian newspapers.
1 code implementation • 29 Apr 2020 • Lorenzo De Mattei, Michele Cafagna, Felice Dell'Orletta, Malvina Nissim, Marco Guerini
We provide a thorough analysis of GePpeTto's quality by means of both an automatic and a human-based evaluation.
no code implementations • EMNLP 2018 • Dominique Brunato, Lorenzo De Mattei, Felice Dell'Orletta, Benedetta Iavarone, Giulia Venturi
In this paper, we present a crowdsourcing-based approach to model the human perception of sentence complexity.