Token Merging: Your ViT But Faster

We introduce Token Merging (ToMe), a simple method to increase the throughput of existing ViT models without needing to train. ToMe gradually combines similar tokens in a transformer using a general and light-weight matching algorithm that is as fast as pruning while being more accurate. Off-the-shelf, ToMe can 2x the throughput of state-of-the-art ViT-L @ 512 and ViT-H @ 518 models on images and 2.2x the throughput of ViT-L on video with only a 0.2-0.3% accuracy drop in each case. ToMe can also easily be applied during training, improving in practice training speed up to 2x for MAE fine-tuning on video. Training with ToMe further minimizes accuracy drop, leading to 2x the throughput of ViT-B on audio for only a 0.4% mAP drop. Qualitatively, we find that ToMe merges object parts into one token, even over multiple frames of video. Overall, ToMe's accuracy and speed are competitive with state-of-the-art on images, video, and audio.
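
ToMe's core operation, as described above, is a single bipartite matching step that pairs up the most similar tokens and merges each pair. The snippet below is a minimal sketch of that idea, not the authors' reference implementation: the function name `bipartite_soft_matching`, the use of raw token features as the similarity metric, and the plain mean when combining matched tokens are assumptions made for illustration.

```python
import torch


def bipartite_soft_matching(x: torch.Tensor, r: int) -> torch.Tensor:
    """Merge the r most similar token pairs in x of shape [B, N, C].

    Sketch of a ToMe-style step: tokens are split into two alternating sets,
    each token in set A is matched to its most similar token in set B, and the
    r strongest matches are merged by averaging. Returns [B, N - r, C].
    Assumes r <= N // 2 and ignores details like protecting the class token.
    """
    B, N, C = x.shape

    # Cosine similarity between the two alternating token sets.
    metric = x / x.norm(dim=-1, keepdim=True)
    a, b = metric[:, ::2], metric[:, 1::2]
    scores = a @ b.transpose(-1, -2)                  # [B, N_a, N_b]

    # Best match in B for every token in A, ranked by similarity.
    node_max, node_idx = scores.max(dim=-1)           # [B, N_a]
    edge_order = node_max.argsort(dim=-1, descending=True)
    merged_idx = edge_order[:, :r]                    # A-tokens to merge away
    kept_idx = edge_order[:, r:]                      # A-tokens to keep
    dst_idx = node_idx.gather(1, merged_idx)          # their targets in B

    def expand(idx: torch.Tensor) -> torch.Tensor:
        return idx.unsqueeze(-1).expand(-1, -1, C)

    src_a, src_b = x[:, ::2], x[:, 1::2]
    kept_a = src_a.gather(1, expand(kept_idx))
    merged_a = src_a.gather(1, expand(merged_idx))

    # Fold each merged A-token into its destination B-token by averaging.
    dst = src_b.scatter_reduce(1, expand(dst_idx), merged_a,
                               reduce="mean", include_self=True)
    return torch.cat([kept_a, dst], dim=1)            # [B, N - r, C]


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 384)                   # e.g. a DeiT-S patch sequence
    print(bipartite_soft_matching(tokens, r=16).shape)  # torch.Size([2, 180, 384])
```

In the paper, a step like this runs inside every transformer block (between attention and MLP), the attention keys serve as the similarity metric, and each token carries a size so merged averages stay weighted; removing r tokens per layer is what compounds into the reported throughput gains.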

| Task | Dataset | Model | Top 1 Accuracy | Rank | GFLOPs | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| Efficient ViTs | ImageNet-1K (with DeiT-S) | ToMe ($r=8$) | 79.7 | #13 | 3.4 | #33 |
| Efficient ViTs | ImageNet-1K (with DeiT-S) | ToMe ($r=13$) | 79.4 | #22 | 2.7 | #17 |
| Efficient ViTs | ImageNet-1K (with DeiT-S) | ToMe ($r=16$) | 79.1 | #29 | 2.3 | #5 |
| Efficient ViTs | ImageNet-1K (with DeiT-T) | ToMe ($r=8$) | 71.7 | #14 | 0.9 | #15 |
| Efficient ViTs | ImageNet-1K (with DeiT-T) | ToMe ($r=12$) | 71.4 | #16 | 0.8 | #8 |
| Efficient ViTs | ImageNet-1K (with DeiT-T) | ToMe ($r=16$) | 70.7 | #18 | 0.6 | #1 |
