Search Results for author: Omri Raccah

Found 1 paper, 0 papers with code

Memory in humans and deep language models: Linking hypotheses for model augmentation

no code implementations · 4 Oct 2022 · Omri Raccah, Phoebe Chen, Ted L. Willke, David Poeppel, Vy A. Vo

The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations.
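
The quadratic cost the abstract refers to comes from self-attention comparing every token with every other token. Below is a minimal NumPy sketch of scaled dot-product attention (single head, no learned projections) illustrating that scaling; it is a generic illustration, not code from the paper.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention (single head, no projections).

    X: (n, d) array of n token embeddings of dimension d.
    The score matrix S is (n, n), so time and memory grow
    quadratically with sequence length n -- the bottleneck
    for generalizing over long temporal durations.
    """
    n, d = X.shape
    S = X @ X.T / np.sqrt(d)               # (n, n) pairwise scores: O(n^2)
    S = S - S.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    A = np.exp(S)
    A = A / A.sum(axis=-1, keepdims=True)  # row-wise softmax -> attention weights
    return A @ X                           # (n, d) contextualized outputs

# Doubling the sequence length quadruples the score-matrix size:
for n in (512, 1024, 2048):
    print(n, "tokens ->", n * n, "attention scores")
```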
