1 code implementation • Insights (ACL) 2022 • Zhang Bingyu, Nikolay Arefyev
The results show that while RoBERTa has a clear advantage on larger training sets, DV-ngrams-cosine outperforms RoBERTa when the labelled training set is very small (10 or 20 documents).
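A minimal sketch of the evaluation protocol this result implies (illustrative only, not the paper's code): freeze document vectors, draw a tiny stratified labelled sample (e.g. 10 or 20 documents), fit a simple classifier, and measure accuracy on the rest. Here random Gaussian vectors with a class mean shift stand in for real embeddings such as DV-ngrams-cosine or RoBERTa features, and a nearest-centroid classifier stands in for the downstream model.

```python
import numpy as np

# Illustrative sketch (assumption): random vectors stand in for frozen
# document embeddings; the point is the small-training-set protocol.
rng = np.random.default_rng(0)
n_docs, dim = 1000, 50
labels = rng.integers(0, 2, size=n_docs)
# Shift class-1 vectors so the toy task is learnable.
vectors = rng.normal(size=(n_docs, dim)) + labels[:, None]

def small_train_accuracy(train_size):
    """Accuracy of a nearest-centroid classifier trained on a tiny sample."""
    # Stratified sample: half the labelled budget per class.
    train = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), train_size // 2, replace=False)
        for c in (0, 1)])
    test = np.setdiff1d(np.arange(n_docs), train)
    # One centroid per class, computed from the labelled sample only.
    centroids = np.stack([vectors[train][labels[train] == c].mean(axis=0)
                          for c in (0, 1)])
    dists = np.linalg.norm(vectors[test][:, None, :] - centroids[None], axis=2)
    return float((dists.argmin(axis=1) == labels[test]).mean())

accs = {size: small_train_accuracy(size) for size in (10, 20, 100)}
print(accs)
```

Repeating the sampling over many random seeds (the paper reports exactly such low-resource settings) is what makes comparisons at 10 or 20 labelled documents meaningful.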
Ranked #7 on Sentiment Analysis on IMDb