Attention Mechanisms

Scaled Dot-Product Attention

Introduced by Vaswani et al. in Attention Is All You Need

Scaled dot-product attention is an attention mechanism in which the dot products are scaled down by $\sqrt{d_k}$. Formally, given a query $Q$, a key $K$, and a value $V$, the attention is calculated as:

$$ {\text{Attention}}(Q, K, V) = \text{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V $$
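The formula above can be sketched directly in NumPy. This is a minimal illustrative implementation (the function name, shapes, and the numerically stable softmax are choices made here, not part of the original paper's pseudocode):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V.

    Assumed shapes for illustration:
    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n_q, n_k) scaled dot products
    # Numerically stable softmax over the key axis
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V                         # (n_q, d_v)

# Toy usage with random inputs
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 64))
K = rng.standard_normal((6, 64))
V = rng.standard_normal((6, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one d_v-dimensional output per query
```

Each row of `weights` is a probability distribution over the keys, so each output row is a convex combination of the value rows.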

If we assume that $q$ and $k$ are $d_k$-dimensional vectors whose components are independent random variables with mean $0$ and variance $1$, then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean $0$ and variance $d_k$. Since we would prefer these values to have variance $1$, we divide by $\sqrt{d_k}$.
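The variance argument is easy to check empirically. The sketch below (sample sizes and seed are arbitrary choices) draws many independent query/key pairs with unit-variance components and compares the variance of the raw and scaled dot products:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k = 256
n = 100_000

# Components i.i.d. with mean 0 and variance 1
q = rng.standard_normal((n, d_k))
k = rng.standard_normal((n, d_k))

dots = (q * k).sum(axis=1)      # raw dot products q . k
scaled = dots / np.sqrt(d_k)    # scaled as in the attention formula

print(dots.var())    # close to d_k = 256
print(scaled.var())  # close to 1
```

Without the $\sqrt{d_k}$ scaling, the softmax inputs grow with $d_k$, pushing it into regions of very small gradient; the scaling keeps the logits at unit variance regardless of dimension.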

Source: Attention Is All You Need

Tasks


Task                  Papers  Share
Language Modelling    63      8.03%
Retrieval             36      4.59%
Large Language Model  30      3.82%
Question Answering    29      3.69%
In-Context Learning   23      2.93%
Sentence              21      2.68%
Machine Translation   15      1.91%
Translation           14      1.78%
Code Generation       13      1.66%

Components


Component  Type
Softmax    Output Functions

Categories