Vietnamese Language Models

3 papers with code • 0 benchmarks • 0 datasets

Pre-trained language models for Vietnamese natural language processing.

Most implemented papers

PhoBERT: Pre-trained language models for Vietnamese

VinAIResearch/PhoBERT Findings of the Association for Computational Linguistics 2020

We present PhoBERT in two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese.
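As a usage sketch only (not part of the paper), PhoBERT checkpoints are commonly loaded through the Hugging Face transformers library; the snippet below assumes the publicly released vinai/phobert-base model name and word-segmented Vietnamese input (multi-syllable words joined by underscores), which PhoBERT's tokenizer expects.

```python
# Minimal sketch, assuming PhoBERT is available on the Hugging Face Hub
# as "vinai/phobert-base" (per the VinAIResearch/PhoBERT release).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
model = AutoModel.from_pretrained("vinai/phobert-base")

# PhoBERT expects word-segmented input: multi-syllable words are joined
# with underscores (e.g. produced by a Vietnamese word segmenter).
sentence = "Chúng_tôi là những nghiên_cứu_viên ."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # Contextual embeddings: shape (batch, sequence_length, hidden_size)
    features = model(**inputs).last_hidden_state

print(features.shape)
```

Swapping the checkpoint name for the large version (e.g. vinai/phobert-large) would follow the same pattern.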

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models

michaelzhouwang/vlue 30 May 2022

We release the VLUE benchmark to promote research on building vision-language models that generalize well to more diverse images and concepts unseen during pre-training, and that are practical in terms of the efficiency-performance trade-off.

ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing

uitnlp/visobert 17 Oct 2023

As resource-rich languages, English and Chinese have seen strong development of transformer-based language models for natural language processing tasks.
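As a rough illustration (not from the paper), a social-media-oriented model like ViSoBERT can be queried as a masked language model via the transformers fill-mask pipeline; the checkpoint name uitnlp/visobert and the example sentence below are assumptions for this sketch.

```python
# Minimal sketch, assuming the ViSoBERT checkpoint is published on the
# Hugging Face Hub as "uitnlp/visobert".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="uitnlp/visobert")

# Social-media-style Vietnamese: "Phim này <mask> quá!" roughly
# translates to "This movie is so <mask>!"
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"Phim này {mask} quá!")[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```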