EMNLP (BlackboxNLP) 2020 • Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, Dietrich Klakow
Our analysis reveals that fine-tuning does change the representations of a pre-trained model, and these changes are typically larger in higher layers; however, only in very few cases does fine-tuning improve probing accuracy beyond what is achieved by the pre-trained model combined with a strong pooling method.
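A minimal sketch of the kind of strong pooling baseline referred to above: mean pooling of a frozen pre-trained model's token representations to obtain sentence embeddings for a probing classifier. The model name and pooling choice here are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative checkpoint; the paper's experiments may use other models.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def mean_pool(sentences):
    # Tokenize a batch of sentences and run the frozen pre-trained model.
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)
    # Mask out padding tokens before averaging over the sequence dimension.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# The pooled vectors can then be fed to a simple probing classifier.
embeddings = mean_pool(["Fine-tuning changes higher layers most."])
print(embeddings.shape)  # torch.Size([1, 768])
```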