Search Results for author: Sayed Mohammad Vakilzadeh Hatefi

Found 1 paper, 1 paper with code

AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers

1 code implementation • 8 Feb 2024 • Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek

Large Language Models are prone to biased predictions and hallucinations, underlining the paramount importance of understanding their model-internal reasoning process.

Tasks: Attribute, Computational Efficiency
