Channel-wise Soft Attention is an attention mechanism in computer vision that assigns a "soft" attention weight to each channel $c$. In soft channel-wise attention, the alignment weights are learned and placed "softly" over all channels, so every channel is scaled by a continuous weight. This contrasts with hard attention, which selects only one channel to attend to at a time.
Image credit: Xu et al.
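The mechanism above can be sketched in a few lines. The following is a minimal NumPy illustration (not any particular paper's implementation): channel descriptors are obtained by global average pooling, scored by a small two-layer network whose weights `w1` and `w2` stand in for learned parameters, and turned into soft weights with a softmax over channels; a hard-attention variant is shown for contrast.

```python
import numpy as np

def channel_soft_attention(feature_map, w1, w2):
    """Soft channel-wise attention over a (C, H, W) feature map.

    Every channel gets a continuous weight in (0, 1); the weights
    sum to 1 across channels (softmax) rather than picking one channel.
    w1, w2 stand in for learned parameters (hypothetical here).
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Score each channel with a small two-layer network.
    h = np.maximum(0.0, w1 @ z)                  # ReLU
    scores = w2 @ h                              # shape (C,)
    # Soft alignment weights: softmax spreads mass over *all* channels.
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                          # shape (C,), sums to 1
    # Reweight each channel; broadcasting over the spatial dims.
    return alpha[:, None, None] * feature_map, alpha

def channel_hard_attention(feature_map, alpha):
    """Hard attention, for contrast: keep only the top-weighted channel."""
    c = int(np.argmax(alpha))
    out = np.zeros_like(feature_map)
    out[c] = feature_map[c]
    return out
```

In practice the scoring network is trained end-to-end with the backbone, and some designs use a per-channel sigmoid instead of a softmax so that the weights need not sum to 1.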
Task | Papers | Share
---|---|---
Object Detection | 7 | 10.94% |
Semantic Segmentation | 7 | 10.94% |
Image Classification | 6 | 9.38% |
Instance Segmentation | 4 | 6.25% |
Point Cloud Completion | 2 | 3.13% |
Image Segmentation | 1 | 1.56% |
Fake News Detection | 1 | 1.56% |
3D Classification | 1 | 1.56% |
Classification | 1 | 1.56% |