Vision Transformers

Co-Scale Conv-Attentional Image Transformer

Introduced by Xu et al. in Co-Scale Conv-Attentional Image Transformers

Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other. Second, the conv-attentional mechanism is designed by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities.
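As a rough illustration of the conv-attentional idea, the sketch below combines a factorized attention term (aggregating keys and values first, so cost is linear in the number of tokens) with a convolution-based relative position term gated by the queries. This is a simplified numpy sketch, not the authors' implementation: the 1-D depthwise kernel, shapes, and the way the two terms are summed are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def factorized_attention(Q, K, V):
    # Aggregate keys/values first: O(N * C^2) instead of O(N^2 * C).
    # Softmax is taken over the token dimension of K.
    context = softmax(K, axis=0).T @ V          # (C, C) summary of K, V
    scale = 1.0 / np.sqrt(Q.shape[1])
    return scale * (Q @ context)                # (N, C)

def conv_rel_pos(Q, V, kernel):
    # Assumed stand-in for the conv-like relative position term:
    # a 1-D depthwise convolution over tokens, gated elementwise by Q.
    N, C = V.shape
    pad = len(kernel) // 2
    Vp = np.pad(V, ((pad, pad), (0, 0)))
    conv = np.stack(
        [np.convolve(Vp[:, c], kernel, mode="valid") for c in range(C)],
        axis=1,
    )                                           # (N, C), one kernel per channel
    return Q * conv

rng = np.random.default_rng(0)
N, C = 16, 8                                    # toy token count and channel width
Q, K, V = (rng.standard_normal((N, C)) for _ in range(3))
out = factorized_attention(Q, K, V) + conv_rel_pos(Q, V, np.array([0.25, 0.5, 0.25]))
print(out.shape)
```

In the paper the position term is realized as a 2-D depthwise convolution over the spatial token grid; the 1-D version here only shows the structure of the computation.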

Source: Co-Scale Conv-Attentional Image Transformers


Tasks


Task Papers Share
Language Modelling 1 25.00%
Instance Segmentation 1 25.00%
Object Detection 1 25.00%
Semantic Segmentation 1 25.00%

