Search Results for author: Uladzislau Yorsh

Found 3 papers, 1 paper with code

On Difficulties of Attention Factorization through Shared Memory

1 code implementation • 31 Mar 2024 • Uladzislau Yorsh, Martin Holeňa, Ondřej Bojar, David Herel

Transformers have revolutionized deep learning in numerous fields, including natural language processing, computer vision, and audio processing.

Linear Self-Attention Approximation via Trainable Feedforward Kernel

no code implementations • 8 Nov 2022 • Uladzislau Yorsh, Alexander Kovalenko

In pursuit of faster computation, Efficient Transformers demonstrate an impressive variety of approaches -- models attaining sub-quadratic attention complexity can exploit sparsity or a low-rank approximation of the inputs to reduce the number of attended keys; other ways to reduce complexity include locality-sensitive hashing, key pooling, additional memory to store information in compacted form, or hybridization with other architectures such as CNNs.
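The abstract alludes to the general family of kernelized linear-attention methods. The snippet below is a minimal sketch of that idea, assuming the "trainable feedforward kernel" is a small learned feature map applied to queries and keys; the class and parameter names are illustrative and not taken from the paper. Reordering the products so that phi(K)^T V is computed first avoids ever forming the N x N attention matrix, giving cost linear in sequence length.

# Hedged sketch of kernelized linear attention with a trainable feature map.
# Names and architecture of phi are assumptions for illustration only.
import torch
import torch.nn as nn

class TrainableKernelLinearAttention(nn.Module):
    """Approximates softmax attention in O(N) by passing queries and keys
    through a small learned feedforward map phi and reordering the matrix
    products so the N x N attention matrix is never materialized."""

    def __init__(self, dim: int, feature_dim: int = 64):
        super().__init__()
        # phi: trainable feedforward kernel shared by queries and keys.
        self.phi = nn.Sequential(
            nn.Linear(dim, feature_dim),
            nn.ReLU(),  # keeps features non-negative, a common choice
        )

    def forward(self, q, k, v):
        # q, k, v: (batch, seq_len, dim)
        q_f = self.phi(q)                                   # (B, N, F)
        k_f = self.phi(k)                                   # (B, N, F)
        # Compute K^T V first: O(N * F * D) instead of O(N^2 * D).
        kv = torch.einsum("bnf,bnd->bfd", k_f, v)           # (B, F, D)
        # Per-token normalizer, analogous to the softmax denominator.
        z = 1.0 / (torch.einsum("bnf,bf->bn", q_f, k_f.sum(1)) + 1e-6)
        return torch.einsum("bnf,bfd->bnd", q_f, kv) * z.unsqueeze(-1)

# Example usage
attn = TrainableKernelLinearAttention(dim=32)
x = torch.randn(2, 128, 32)
print(attn(x, x, x).shape)  # torch.Size([2, 128, 32])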

SimpleTRON: Simple Transformer with O(N) Complexity

no code implementations • 23 Nov 2021 • Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov

In this paper, we propose that the pairwise dot-product matching attention layer, which is widely used in Transformer-based models, is redundant for model performance (a reference sketch of this layer follows the entry below).

Text Classification
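For reference, the sketch below shows the standard scaled dot-product attention layer the abstract argues is redundant; it is not the paper's proposed O(N) replacement, which the snippet above does not describe. The quadratic cost comes from matching every query against every key.

# Reference sketch of standard scaled dot-product attention (the widely used
# layer the abstract refers to, not the paper's proposed alternative).
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Plain softmax attention: each query is matched against all keys,
    which is what makes the layer O(N^2) in sequence length N."""
    # q, k, v: (batch, seq_len, dim)
    d = q.size(-1)
    scores = torch.einsum("bqd,bkd->bqk", q, k) / math.sqrt(d)  # (B, N, N) pairwise matching
    weights = torch.softmax(scores, dim=-1)                     # row-normalized attention weights
    return torch.einsum("bqk,bkd->bqd", weights, v)             # weighted sum of values

# Example usage
x = torch.randn(2, 16, 32)
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([2, 16, 32])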
