Vietnamese Natural Language Understanding
1 paper with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Vietnamese Natural Language Understanding.
No evaluation results yet.
Most implemented papers
VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
We release the VLUE benchmark to promote research on building vision-language models that generalize well to more diverse images and concepts unseen during pre-training, and are practical in terms of efficiency-performance trade-off.