Search Results for author: Stepan Tytarenko

Found 1 papers, 1 papers with code

Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs

1 code implementation · 30 Jan 2024 · Stepan Tytarenko, Mohammad Ruhul Amin

We show that a linear transformation of the text representation from any transformer model using the task-specific concept operator results in a projection onto the latent concept space, referred to as context attribution in this paper.

Tasks: Sentiment Analysis, Sentiment Classification, +3
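The abstract describes projecting a transformer's text representation onto a latent concept space via a task-specific linear operator. A minimal sketch of such a linear projection, where all names, shapes, and the random operator are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
d_model = 8      # size of the transformer text representation
d_concept = 3    # size of the latent concept space

# A frozen text representation from any transformer
# (e.g., a pooled sentence embedding) -- random stand-in here.
h = rng.standard_normal(d_model)

# Task-specific "concept operator": a linear map into the concept space.
# In the paper this would be learned per task; here it is random.
W = rng.standard_normal((d_concept, d_model))

# The context attribution is the linear projection of h by W.
context_attribution = W @ h

print(context_attribution.shape)  # (3,)
```

The key point the sketch illustrates is that only the small linear operator is task-specific; the underlying pre-trained transformer representation is left untouched, which is why no fine-tuning is needed.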
