1 code implementation • 31 Mar 2024 • Uladzislau Yorsh, Martin Holeňa, Ondřej Bojar, David Herel
Transformers have revolutionized deep learning in numerous fields, including natural language processing, computer vision, and audio processing.
no code implementations • 8 Nov 2022 • Uladzislau Yorsh, Alexander Kovalenko
In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches -- models attaining sub-quadratic attention complexity can exploit a notion of sparsity or a low-rank approximation of the inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory to store information in a compacted form, or hybridization with other architectures such as CNNs.
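One of the families mentioned above, low-rank approximation, can be illustrated with a short sketch. This is not the authors' method but a generic Linformer-style reduction, assuming a fixed projection matrix `proj` (learned in real models, random here) that compresses the n keys and values down to r << n rows, so the attention matrix is (n, r) rather than (n, n):

```python
import numpy as np

def lowrank_attention(q, k, v, proj):
    """Illustrative low-rank attention: compress keys/values with `proj`
    (shape (r, n)) before the softmax, giving O(n*r) score computation
    instead of the quadratic O(n^2)."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    k_r = proj @ k                                   # (r, d) compressed keys
    v_r = proj @ v                                   # (r, d) compressed values
    scores = (q @ k_r.T) * scale                     # (n, r) score matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v_r                             # (n, d) output

rng = np.random.default_rng(0)
n, d, r = 16, 8, 4
q = rng.normal(size=(n, d))
k = rng.normal(size=(n, d))
v = rng.normal(size=(n, d))
proj = rng.normal(size=(r, n)) / np.sqrt(n)          # hypothetical projection
out = lowrank_attention(q, k, v, proj)
print(out.shape)  # (16, 8)
```

With r fixed, memory and compute for the score matrix grow linearly in the sequence length n, which is the point of this class of methods.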
no code implementations • 23 Nov 2021 • Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov
In this paper, we propose that the dot-product pairwise matching attention layer, which is widely used in Transformer-based models, is redundant for model performance.
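For reference, the layer being called redundant is standard scaled dot-product attention, sketched below (a generic textbook formulation, not code from the paper): every query is matched against every key, producing an n-by-n score matrix.

```python
import numpy as np

def dot_product_attention(q, k, v):
    """Standard scaled dot-product attention: pairwise query-key matching
    followed by a row-wise softmax over the (n, n) score matrix."""
    scores = (q @ k.T) / np.sqrt(q.shape[-1])        # (n, n) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax rows sum to 1
    return weights @ v                               # weighted value mixture

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 4))
y = dot_product_attention(x, x, x)   # self-attention: q = k = v = x
print(y.shape)  # (10, 4)
```

The quadratic cost of the score matrix is exactly what the Efficient Transformer variants above try to avoid.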