1 code implementation • ACL (WOAH) 2021 • Vanessa Hahn, Dana Ruiter, Thomas Kleinbauer, Dietrich Klakow
We observe that, on both similar and distant target tasks and across all languages, the subspace-based representations transfer more effectively than standard BERT representations in the zero-shot setting, with improvements between F1 +10.9 and F1 +42.9 over the baselines across all tested monolingual and cross-lingual scenarios.