Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs

30 Jan 2024 · Stepan Tytarenko, Mohammad Ruhul Amin

Fine-tuning large pre-trained language models (LLMs) on particular datasets is a commonly employed strategy in Natural Language Processing (NLP) classification tasks. However, this approach usually results in a loss of the model's generalizability. In this paper, we present a framework that maintains generalizability and enhances performance on the downstream task by utilizing task-specific context attribution. We show that a linear transformation of the text representation from any transformer model using a task-specific concept operator results in a projection onto the latent concept space, referred to in this paper as context attribution. The concept operator is optimized during the supervised learning stage via novel loss functions. The proposed framework demonstrates that context attribution of the text representation for each task objective can improve the capacity of the discriminator function and thus achieve better performance on the classification task. Experimental results on three datasets, namely HateXplain, IMDB reviews, and Social Media Attributions, show that the proposed model attains superior accuracy and generalizability. Specifically, for non-fine-tuned BERT on the HateXplain dataset, we observe an 8% improvement in accuracy and a 10% improvement in F1-score. For the IMDB dataset, the proposed model outperforms the fine-tuned state-of-the-art XLNet by 1% in both accuracy and F1-score. Furthermore, in an out-of-domain cross-dataset test, DistilBERT fine-tuned on the IMDB dataset in conjunction with the proposed model improves the F1-score on the HateXplain dataset by 7%. For the Social Media Attributions dataset of YouTube comments, we observe a 5.2% increase in F1-score. The proposed framework is implemented in PyTorch and available open-source on GitHub.
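At its core, the method keeps the pre-trained encoder frozen and applies a learned linear map (the concept operator) to its text representation, projecting it into a task-specific latent concept space that a lightweight classifier then reads. Below is a minimal PyTorch sketch of that idea; the class name `ContextAttribution`, the `n_concepts` dimension, and the [CLS]-pooling choice are illustrative assumptions, and the paper's novel loss functions are omitted.

```python
# Minimal sketch of task-specific context attribution, assuming a frozen
# Hugging Face encoder. Names (ContextAttribution, n_concepts) are
# illustrative, not the paper's; the paper's novel loss terms are omitted.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContextAttribution(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", n_concepts=64, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        for p in self.encoder.parameters():  # keep the pre-trained LLM frozen
            p.requires_grad = False
        hidden = self.encoder.config.hidden_size
        # "Concept operator": a linear projection onto the latent concept space.
        self.concept_op = nn.Linear(hidden, n_concepts, bias=False)
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] text representation
        attribution = self.concept_op(cls)  # projection = context attribution
        return self.classifier(attribution)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContextAttribution()
batch = tokenizer(["an example comment"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```

In this sketch only `concept_op` and `classifier` receive gradients during supervised training, which matches the paper's premise of improving downstream performance without fine-tuning the pre-trained LLM.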


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Text Classification | HateXplain | Space-DistilBERT | F1 Macro | 0.5187 | #1 |
| Zero-Shot Text Classification | HateXplain | DistilBERT | F1 Macro | 0.4450 | #2 |
| Text Classification | HateXplain | BERT-base | Accuracy (2 classes) | 0.6588 | #4 |
| Text Classification | HateXplain | BERT-base | F1 Macro | 0.6555 | #4 |
| Text Classification | HateXplain | Space-BERT | Accuracy (2 classes) | 0.8110 | #3 |
| Text Classification | HateXplain | Space-BERT | F1 Macro | 0.8108 | #3 |
| Text Classification | HateXplain | Space-XLNet | Accuracy (2 classes) | 0.8798 | #1 |
| Text Classification | HateXplain | Space-XLNet | F1 Macro | 0.8797 | #1 |
| Text Classification | HateXplain | XLNet | Accuracy (2 classes) | 0.8160 | #2 |
| Text Classification | HateXplain | XLNet | F1 Macro | 0.8156 | #2 |
| Sentiment Analysis | IMDb | Space-XLNet | Accuracy | 94.88 | #19 |
| Text Classification | IMDb Movie Reviews | Space-XLNet | F1 Macro | 0.9487 | #1 |
| Sentiment Analysis | IMDb Movie Reviews | Space-XLNet | Accuracy (2 classes) | 0.9488 | #1 |
| Sentiment Analysis | IMDb Movie Reviews | Space-XLNet | F1 Macro | 0.9487 | #1 |
| Sentiment Analysis | IMDb Movie Reviews | Space-DistilBERT | Accuracy (2 classes) | 0.8322 | #2 |
| Sentiment Analysis | IMDb Movie Reviews | Space-DistilBERT | F1 Macro | 0.8320 | #2 |
| Text Classification | IMDb Movie Reviews | XLNet | Accuracy (2 classes) | 0.9387 | #1 |
| Text Classification | Social media attributions of YouTube comments | Space-BERT | Accuracy (2 classes) | 0.8309 | #1 |
| Text Classification | Social media attributions of YouTube comments | Space-BERT | F1 Macro | 0.8006 | #1 |
| Text Classification | Social media attributions of YouTube comments | BERT-base | Accuracy (2 classes) | 0.8220 | #2 |
| Text Classification | Social media attributions of YouTube comments | BERT-base | F1 Macro | 0.7484 | #2 |
