no code implementations • EMNLP (NLLP) 2021 • Georgios Tziafas, Eugenie de Saint-Phalle, Wietse de Vries, Clara Egger, Tommaso Caselli
The COVID-19 pandemic has led governments across the world to implement exceptional measures to counteract its impact.
1 code implementation • ACL 2022 • Wietse de Vries, Martijn Wieling, Malvina Nissim
Cross-lingual transfer learning with large multilingual pre-trained models can be an effective approach for low-resource languages with no labeled training data.
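The general recipe described here is to fine-tune a multilingual pre-trained model on labeled data in a high-resource language and apply it directly to a target language with no labeled data. Below is a minimal sketch of that recipe, not the paper's exact setup: the model (XLM-R), dataset (XNLI), and hyperparameters are illustrative choices.

```python
# Sketch of zero-shot cross-lingual transfer: fine-tune a multilingual
# encoder on English NLI data, then evaluate on German without any
# German training labels. Model/dataset choices are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

def encode(batch):
    return tok(batch["premise"], batch["hypothesis"],
               truncation=True, padding="max_length", max_length=128)

train_en = load_dataset("xnli", "en", split="train[:1%]").map(encode, batched=True)
test_de = load_dataset("xnli", "de", split="test").map(encode, batched=True)

def compute_metrics(p):
    preds = p.predictions.argmax(-1)
    return {"accuracy": float((preds == p.label_ids).mean())}

args = TrainingArguments(output_dir="xlmr-xnli", num_train_epochs=1,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=train_en,
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate(eval_dataset=test_de))  # zero-shot German accuracy
```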
2 code implementations • 22 May 2023 • Wietse de Vries, Martijn Wieling, Malvina Nissim
The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks.
1 code implementation • Findings (ACL) 2021 • Wietse de Vries, Martijn Bartelds, Malvina Nissim, Martijn Wieling
For many (minority) languages, the resources needed to train large models are not available.
1 code implementation • Findings (ACL) 2021 • Wietse de Vries, Malvina Nissim
Specifically, we describe the adaptation of English GPT-2 to Italian and Dutch by retraining lexical embeddings without tuning the Transformer layers.
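A minimal sketch of the described recipe follows: freeze the Transformer layers and retrain only the lexical (token) embeddings on target-language text. In the paper's setting a new target-language tokenizer with a matching vocabulary size is also trained first; the corpus loading and tokenizer swap are omitted here.

```python
# Sketch: keep GPT-2's Transformer layers frozen and retrain only the
# lexical embeddings for the new language. Training data is a placeholder.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze everything...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze the token embeddings. GPT-2 ties the input embeddings
# (wte) to the LM head, so training wte also updates the output layer.
model.transformer.wte.weight.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# Standard causal LM training loop over target-language text (omitted):
# for batch in dataloader:
#     loss = model(input_ids=batch, labels=batch).loss
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```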
1 code implementation • 25 Nov 2020 • Martijn Bartelds, Wietse de Vries, Faraz Sanal, Caitlin Richter, Mark Liberman, Martijn Wieling
We show that speech representations extracted from a specific type of neural model (i.e., Transformers) lead to a better match with human perception than two earlier approaches based on phonetic transcriptions and MFCC-based acoustic features.
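The sketch below shows how such Transformer speech representations can be extracted, assuming wav2vec 2.0 via Hugging Face transformers. The mean-pooled cosine distance at the end is a simple stand-in for the paper's actual distance computation, and the layer choice is illustrative.

```python
# Sketch: extract hidden states from a pre-trained speech Transformer
# (wav2vec 2.0) and compute a simple distance between two utterances.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

def embed(waveform_16khz):
    inputs = extractor(waveform_16khz, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # Take an intermediate Transformer layer and mean-pool over time
    # (layer 8 is an arbitrary illustrative choice).
    return out.hidden_states[8].mean(dim=1).squeeze(0)

def acoustic_distance(wav_a, wav_b):
    a, b = embed(wav_a), embed(wav_b)
    return 1 - torch.cosine_similarity(a, b, dim=0).item()
```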
2 code implementations • Findings (EMNLP) 2020 • Wietse de Vries, Andreas van Cranenburgh, Malvina Nissim
Peeking into the inner workings of BERT has shown that its layers resemble the classical NLP pipeline, with progressively more complex tasks being concentrated in later layers.
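The standard way to make this kind of observation is layer-wise probing: train a simple classifier on each layer's representations and see where task accuracy peaks. Here is a hedged sketch of that procedure; the [CLS]-pooling, logistic-regression probe, and data are illustrative, not the paper's exact protocol.

```python
# Sketch of layer-wise probing: fit a simple classifier on each BERT
# layer's [CLS] representation and compare accuracies across layers.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased")

def layer_features(sentences):
    enc = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc, output_hidden_states=True)
    # hidden_states holds the embedding layer plus one tensor per layer;
    # take the [CLS] vector from each.
    return [h[:, 0].numpy() for h in out.hidden_states]

def probe_per_layer(train_sents, train_y, test_sents, test_y):
    scores = []
    for X_tr, X_te in zip(layer_features(train_sents), layer_features(test_sents)):
        clf = LogisticRegression(max_iter=1000).fit(X_tr, train_y)
        scores.append(clf.score(X_te, test_y))
    return scores  # one accuracy per layer; the peak shows where the info lives
```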
2 code implementations • 19 Dec 2019 • Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, Malvina Nissim
The transformer-based pre-trained language model BERT has helped to improve state-of-the-art performance on many natural language processing (NLP) tasks.
Ranked #3 on Sentiment Analysis on DBRD
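The resulting Dutch BERT model (BERTje) is available on the Hugging Face hub as GroNLP/bert-base-dutch-cased. A minimal sketch of loading it for sentiment classification, e.g. on DBRD; the classification head below is untrained and would need fine-tuning before use.

```python
# Sketch: load the Dutch BERT model (BERTje) with a sentiment
# classification head. Fine-tuning details are omitted.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "GroNLP/bert-base-dutch-cased"  # BERTje on the Hugging Face hub
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

text = "Wat een geweldig boek!"  # "What a great book!"
inputs = tok(text, return_tensors="pt")
logits = model(**inputs).logits  # randomly initialized head: fine-tune first
```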