Subword Segmentation

WordPiece is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with the individual characters of the language, and combinations of symbols from the current vocabulary are then added iteratively, each time choosing the combination that most increases the likelihood of the training data under a language model. The process is:

  1. Initialize the word unit inventory with all the characters in the text.
  2. Build a language model on the training data using the inventory from step 1.
  3. Generate a new word unit by combining two units from the current inventory, growing the word unit inventory by one. Out of all possible candidates, choose the new word unit that increases the likelihood of the training data the most when it is added to the model.
  4. Go to step 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold (a rough sketch of this loop follows the list).
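
The following is a minimal Python sketch of this training loop, not a reference implementation. It assumes a whitespace-tokenized corpus, marks non-initial pieces with the "##" continuation prefix used by BERT-style vocabularies, and approximates the likelihood gain of step 3 with the commonly used score freq(pair) / (freq(left) * freq(right)) rather than refitting a full language model at every iteration.

    from collections import Counter

    def train_wordpiece(corpus, vocab_size):
        # Step 1: split every word into characters; non-initial characters get
        # the "##" continuation prefix (a BERT-style convention, assumed here).
        word_freqs = Counter(corpus.split())
        splits = {w: [w[0]] + ["##" + c for c in w[1:]] for w in word_freqs}
        vocab = {unit for units in splits.values() for unit in units}

        # Step 4: keep merging until the inventory reaches the predefined limit
        # (or no pairs are left to merge).
        while len(vocab) < vocab_size:
            unit_freqs, pair_freqs = Counter(), Counter()
            for w, units in splits.items():
                f = word_freqs[w]
                for u in units:
                    unit_freqs[u] += f
                for pair in zip(units, units[1:]):
                    pair_freqs[pair] += f
            if not pair_freqs:
                break  # every word is already a single unit

            # Step 3 (approximated): pick the pair whose merge raises the
            # unigram likelihood of the training data the most.
            best = max(pair_freqs,
                       key=lambda p: pair_freqs[p] / (unit_freqs[p[0]] * unit_freqs[p[1]]))
            merged = best[0] + best[1][2:]  # drop the "##" of the right-hand piece
            vocab.add(merged)

            # Apply the chosen merge everywhere before the next iteration (step 2).
            for w, units in splits.items():
                out, i = [], 0
                while i < len(units):
                    if i + 1 < len(units) and (units[i], units[i + 1]) == best:
                        out.append(merged)
                        i += 2
                    else:
                        out.append(units[i])
                        i += 1
                splits[w] = out
        return vocab

    # Toy usage on a made-up corpus.
    print(sorted(train_wordpiece("hugs hug hugger pug pugs", vocab_size=15)))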

Image: WordPiece as used in BERT

Source: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
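
At inference time, BERT-style tokenizers apply the trained vocabulary to each word with a greedy longest-match-first search, falling back to an unknown token when no piece matches. The sketch below illustrates that step; the toy vocabulary and the "[UNK]" token are illustrative assumptions rather than details taken from the source above.

    def wordpiece_tokenize(word, vocab, unk_token="[UNK]"):
        # Greedily take the longest vocabulary piece starting at `start`;
        # non-initial pieces must carry the "##" continuation prefix.
        tokens, start = [], 0
        while start < len(word):
            end, piece = len(word), None
            while start < end:
                candidate = word[start:end]
                if start > 0:
                    candidate = "##" + candidate
                if candidate in vocab:
                    piece = candidate
                    break
                end -= 1
            if piece is None:
                return [unk_token]  # no segmentation covers this word
            tokens.append(piece)
            start = end
        return tokens

    toy_vocab = {"un", "aff", "##aff", "##able", "##ably"}
    print(wordpiece_tokenize("unaffable", toy_vocab))  # ['un', '##aff', '##able']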

Tasks


Task                  Papers   Share
Language Modelling       110  12.42%
Retrieval                 91  10.27%
Question Answering        49   5.53%
Sentence                  37   4.18%
Large Language Model      35   3.95%
Text Classification       33   3.72%
Sentiment Analysis        30   3.39%
NER                       19   2.14%
Text Generation           18   2.03%
