Skip-gram Word2Vec is an architecture for computing word embeddings. Instead of using the surrounding words to predict the center word, as in CBOW Word2Vec, Skip-gram Word2Vec uses the center word to predict the surrounding words.
The skip-gram objective function sums the log probabilities of the surrounding $n$ words to the left and right of the target word $w_{t}$ to produce the following objective:
$$J_\theta = \frac{1}{T}\sum^{T}_{t=1}\sum_{-n\leq{j}\leq{n},\, j\neq{0}}\log{p}\left(w_{t+j}\mid{w_{t}}\right)$$
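The following is a minimal sketch of this objective in NumPy, assuming a toy corpus of integer token ids and two randomly initialised embedding matrices, `W_in` (center-word vectors) and `W_out` (context-word vectors); all names and sizes here are illustrative, and $p(w_{t+j}\mid w_t)$ is computed with a full softmax rather than the hierarchical softmax or negative sampling used in practice.

```python
import numpy as np

# Illustrative setup (not from the paper): toy vocabulary, corpus, and weights.
rng = np.random.default_rng(0)
vocab_size, dim, n = 10, 8, 2                   # n = context window radius
corpus = rng.integers(0, vocab_size, size=50)   # toy sequence of token ids
W_in = rng.normal(scale=0.1, size=(vocab_size, dim))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(vocab_size, dim))  # context-word vectors

def log_p(context, center):
    """log p(w_context | w_center) via a full softmax over the vocabulary."""
    scores = W_out @ W_in[center]   # one dot-product score per vocab word
    scores -= scores.max()          # shift for numerical stability
    return scores[context] - np.log(np.exp(scores).sum())

# J_theta: average over positions t of the summed log-probabilities of the
# n context words on each side of the center word w_t.
T = len(corpus)
J = 0.0
for t in range(T):
    for j in range(-n, n + 1):
        if j == 0 or not (0 <= t + j < T):
            continue                # skip the center word and corpus edges
        J += log_p(corpus[t + j], corpus[t])
J /= T
print(f"skip-gram objective J_theta = {J:.4f}")
```

Training would ascend the gradient of this quantity with respect to `W_in` and `W_out`; the full softmax costs $O(|V|)$ per prediction, which is why the original work replaces it with cheaper approximations.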
TASK | PAPERS | SHARE
---|---|---
Feature Engineering | 1 | 10.00%
Clinical Assertion Status Detection | 1 | 10.00%
Clinical Concept Extraction | 1 | 10.00%
Named Entity Recognition | 1 | 10.00%
Drug Discovery | 1 | 10.00%
Language Modelling | 1 | 10.00%
Cross-Lingual Natural Language Inference | 1 | 10.00%
Cross-Lingual Transfer | 1 | 10.00%
Natural Language Inference | 1 | 10.00%