FlauBERT: Unsupervised Language Model Pre-training for French

LREC 2020 · Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab

Language models have become a key step toward achieving state-of-the-art results in many Natural Language Processing (NLP) tasks. Leveraging the huge amounts of unlabeled text now available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level...
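As an illustration of the pre-train-then-fine-tune workflow the abstract describes, the sketch below extracts contextual sentence-level representations from a pre-trained FlauBERT checkpoint. It assumes the Hugging Face `transformers` library and the `flaubert/flaubert_base_cased` model name, neither of which is stated on this page; treat it as a minimal sketch, not the authors' own pipeline.

```python
# Minimal sketch: contextual representations from a pre-trained FlauBERT.
# Assumes the `transformers` library and the "flaubert/flaubert_base_cased"
# checkpoint (assumptions; not taken from this page).
import torch
from transformers import FlaubertModel, FlaubertTokenizer

tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased")

# Unlike static word embeddings, each token's vector here depends on the
# whole sentence, i.e. the representations are contextualized.
inputs = tokenizer("Le chat dort sur le canapé.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Shape: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

For an actual downstream task, one would typically swap `FlaubertModel` for a task-specific head such as `FlaubertForSequenceClassification` and fine-tune on labeled data, which is the usage pattern the abstract refers to.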


