no code implementations • LREC 2022 • Cedric Lothritz, Bertrand Lebichot, Kevin Allix, Lisa Veiber, Tegawendé Bissyandé, Jacques Klein, Andrey Boytsov, Clément Lefebvre, Anne Goujon
Pre-trained Language Models such as BERT have become ubiquitous in NLP, where they achieve state-of-the-art performance on most tasks.
no code implementations • COLING 2020 • Cedric Lothritz, Kevin Allix, Lisa Veiber, Tegawendé F. Bissyandé, Jacques Klein
In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF).
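As a rough illustration (not the authors' code), the sketch below shows how the three transformer-based models could be instantiated for a token-classification task with the HuggingFace `transformers` library. The checkpoint names and the label count are assumptions; the CRF and BiLSTM-CNN-CRF baselines would come from separate tooling (e.g. `sklearn-crfsuite`) and are not shown.

```python
# Minimal sketch, assuming a CoNLL-style NER setup; not code from the paper.
from transformers import AutoTokenizer, AutoModelForTokenClassification

NUM_LABELS = 9  # assumption: size of a CoNLL-style NER tag set

# The three transformer-based models compared in the paper, loaded with a
# token-classification head (checkpoint names are illustrative choices).
for checkpoint in ("bert-base-cased", "roberta-base", "xlnet-base-cased"):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForTokenClassification.from_pretrained(
        checkpoint, num_labels=NUM_LABELS
    )
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{checkpoint}: {n_params:,} parameters")
```

Each model would then be fine-tuned on the same labelled data and scored with the same token-level metric as the non-transformer baselines, so that the comparison isolates the choice of architecture.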