no code implementations • 18 Oct 2023 • Duc-Vu Nguyen, Quoc-Nam Nguyen
Our evaluation of six well-known LLMs, namely BLOOMZ-7.1B-MT, LLaMA-2-7B, LLaMA-2-70B, GPT-3, GPT-3.5, and GPT-4.0, on the ViMMRC 1.0 and ViMMRC 2.0 benchmarks and our proposed dataset shows promising results on the MCSB ability of LLMs for Vietnamese.
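MCSB (multiple-choice symbol binding) evaluation asks a model to answer a multiple-choice question with the option symbol (A, B, C, ...) alone, rather than restating the option text. A minimal sketch of such a prompt builder is below; the function name and prompt wording are illustrative assumptions, not the paper's exact template.

```python
def build_mcsb_prompt(question: str, choices: list[str]) -> str:
    """Format a multiple-choice question so the model must bind each
    option to a symbol and answer with that symbol only.
    (Hypothetical helper; the prompt template is an assumption.)"""
    labels = "ABCDEFGH"
    lines = [question]
    # Attach one symbol per choice, e.g. "A. <choice text>".
    lines += [f"{labels[i]}. {choice}" for i, choice in enumerate(choices)]
    lines.append("Answer with the letter of the correct option.")
    lines.append("Answer:")
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = build_mcsb_prompt(
        "Which city is the capital of Vietnam?",
        ["Ho Chi Minh City", "Hanoi", "Da Nang"],
    )
    print(prompt)
```

A model with strong symbol-binding ability should reply "B" here regardless of how the options are ordered.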
1 code implementation • 17 Oct 2023 • Quoc-Nam Nguyen, Thang Chau Phan, Duc-Vu Nguyen, Kiet Van Nguyen
English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks.
Vietnamese Language Models • Vietnamese Social Media Text Processing +1
1 code implementation • 6 Sep 2023 • Chau-Thang Phan, Quoc-Nam Nguyen, Chi-Thanh Dang, Trong-Hop Do, Kiet Van Nguyen
Our proposed ViCGCN approach demonstrates a significant improvement of up to 6.21%, 4.61%, and 2.63% over the best Contextualized Language Models, including multilingual and monolingual, on three benchmark datasets, UIT-VSMEC, UIT-ViCTSD, and UIT-VSFC, respectively.
1 code implementation • 31 Aug 2023 • Chau-Thang Phan, Quoc-Nam Nguyen, Kiet Van Nguyen
Drawing inspiration from recent advancements in natural language processing and understanding, we cast link prediction as an NLI task, wherein the presence of a link between two articles is treated as a premise, and the task is to determine whether this premise holds based on the information presented in the articles.
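The NLI formulation above pairs the two articles' text as the premise with a candidate link statement as the hypothesis, so an entailment classifier can decide whether the link holds. A minimal sketch of that input construction is below; the function name, separator token, and hypothesis wording are assumptions for illustration, and a cross-encoder NLI model would then score the pair for entailment.

```python
def build_link_nli_pair(article_a: str, article_b: str) -> dict:
    """Cast link prediction between two articles as an NLI example.
    (Hypothetical helper; field names and wording are assumptions.)"""
    # Premise: the information presented in both articles,
    # joined with a separator for a cross-encoder input.
    premise = f"{article_a} [SEP] {article_b}"
    # Hypothesis: the candidate link statement to verify.
    hypothesis = "There is a link between these two articles."
    return {"premise": premise, "hypothesis": hypothesis}


if __name__ == "__main__":
    pair = build_link_nli_pair(
        "Hanoi is the capital of Vietnam.",
        "Vietnam is a country in Southeast Asia.",
    )
    print(pair["premise"])
    print(pair["hypothesis"])
```

An NLI model predicting "entailment" for this pair would correspond to predicting that the link exists.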