Attentive Normalization (AN) generalizes the affine transformation component of vanilla feature normalization. Instead of learning a single affine transformation, AN learns a mixture of affine transformations and uses their weighted sum as the final affine transformation, which re-calibrates features in an instance-specific way. The weights are computed by leveraging feature attention.
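The mechanism above can be sketched in NumPy. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes BN-style standardization, a single linear attention head over globally pooled features, and a softmax over the K mixture weights (the function and parameter names are invented for this sketch).

```python
import numpy as np

def attentive_norm(x, gammas, betas, w_att, eps=1e-5):
    """Minimal sketch of Attentive Normalization (hypothetical names/shapes).

    x:      (N, C, H, W) input feature maps
    gammas: (K, C) mixture of K learnable scale vectors
    betas:  (K, C) mixture of K learnable shift vectors
    w_att:  (C, K) attention head mapping pooled features to K logits
    """
    # Standardize per channel over the batch (BN-style statistics).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)

    # Feature attention: pool each instance, project to K mixture logits,
    # and normalize with a softmax to get per-instance weights.
    pooled = x.mean(axis=(2, 3))                      # (N, C)
    logits = pooled @ w_att                           # (N, K)
    lam = np.exp(logits - logits.max(axis=1, keepdims=True))
    lam = lam / lam.sum(axis=1, keepdims=True)        # (N, K)

    # Instance-specific affine parameters: weighted sums over the mixture.
    gamma = lam @ gammas                              # (N, C)
    beta = lam @ betas                                # (N, C)
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]
```

Because the mixture weights depend on each instance's own pooled features, two inputs in the same batch generally receive different affine parameters, unlike a standard BN layer whose single (gamma, beta) pair is shared across all instances.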
Source: Attentive Normalization
| Task | Papers | Share |
|---|---|---|
| Style Transfer | 1 | 9.09% |
| Video Style Transfer | 1 | 9.09% |
| Conditional Image Generation | 1 | 9.09% |
| Image Generation | 1 | 9.09% |
| Semantic Correspondence | 1 | 9.09% |
| Semantic Similarity | 1 | 9.09% |
| Semantic Textual Similarity | 1 | 9.09% |
| Image Classification | 1 | 9.09% |
| Instance Segmentation | 1 | 9.09% |
| Component | Type |
|---|---|
| 🤖 No Components Found | You can add them if they exist; e.g. Mask R-CNN uses RoIAlign |