Channel Attention Module

Introduced by Woo et al. in CBAM: Convolutional Block Attention Module

A Channel Attention Module is a module for channel-based attention in convolutional neural networks. We produce a channel attention map by exploiting the inter-channel relationship of features. As each channel of a feature map is considered a feature detector, channel attention focuses on ‘what’ is meaningful given an input image. To compute the channel attention efficiently, we squeeze the spatial dimensions of the input feature map.

We first aggregate spatial information of a feature map by using both average-pooling and max-pooling operations, generating two different spatial context descriptors: $\mathbf{F}^{c}_{avg}$ and $\mathbf{F}^{c}_{max}$, which denote average-pooled features and max-pooled features respectively.
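For concreteness, this squeeze step can be sketched in PyTorch as follows (a minimal sketch; the tensor shape is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 32, 32)       # feature map F with shape (N, C, H, W)
f_avg = F.adaptive_avg_pool2d(x, 1)  # F^c_avg: shape (8, 64, 1, 1)
f_max = F.adaptive_max_pool2d(x, 1)  # F^c_max: shape (8, 64, 1, 1)
```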

Both descriptors are then forwarded to a shared network to produce our channel attention map $\mathbf{M}_{c} \in \mathbb{R}^{C\times{1}\times{1}}$, where $C$ is the number of channels. The shared network is a multi-layer perceptron (MLP) with one hidden layer. To reduce parameter overhead, the hidden activation size is set to $\mathbb{R}^{C/r\times{1}\times{1}}$, where $r$ is the reduction ratio. After the shared network is applied to each descriptor, we merge the output feature vectors by element-wise summation. In short, the channel attention is computed as:

$$
\begin{aligned}
\mathbf{M}_{c}\left(\mathbf{F}\right) &= \sigma\left(\text{MLP}\left(\text{AvgPool}\left(\mathbf{F}\right)\right)+\text{MLP}\left(\text{MaxPool}\left(\mathbf{F}\right)\right)\right) \\
&= \sigma\left(\mathbf{W}_{1}\left(\mathbf{W}_{0}\left(\mathbf{F}^{c}_{avg}\right)\right)+\mathbf{W}_{1}\left(\mathbf{W}_{0}\left(\mathbf{F}^{c}_{max}\right)\right)\right)
\end{aligned}
$$

where $\sigma$ denotes the sigmoid function, $\mathbf{W}_{0} \in \mathbb{R}^{C/r\times{C}}$, and $\mathbf{W}_{1} \in \mathbb{R}^{C\times{C/r}}$. Note that the MLP weights, $\mathbf{W}_{0}$ and $\mathbf{W}_{1}$, are shared for both inputs, and that $\mathbf{W}_{0}$ is followed by the ReLU activation function.
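Putting the two lines of the equation together, the following is a minimal PyTorch sketch of the full module. The class name, the default reduction ratio of 16, and the use of bias-free 1×1 convolutions to implement the shared MLP are illustrative choices, not necessarily those of the authors' reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Channel attention map M_c(F); a minimal sketch of the equation above."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP: W0 maps C -> C/r, ReLU follows W0, W1 maps C/r -> C.
        # 1x1 convolutions operate on the (N, C, 1, 1) descriptors directly.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_avg = F.adaptive_avg_pool2d(x, 1)  # F^c_avg, shape (N, C, 1, 1)
        f_max = F.adaptive_max_pool2d(x, 1)  # F^c_max, shape (N, C, 1, 1)
        # Same MLP on both descriptors, element-wise sum, then sigmoid.
        return torch.sigmoid(self.mlp(f_avg) + self.mlp(f_max))
```

Applied multiplicatively, e.g. `x * ChannelAttention(64)(x)` for an input `x` of shape `(N, 64, H, W)`, the map rescales each channel by its attention weight, broadcasting over the spatial dimensions.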

Note that a channel attention module that uses only average pooling is equivalent to the Squeeze-and-Excitation (SE) module.
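To make that correspondence concrete, dropping the max-pooling branch from the sketch above recovers the SE computation. Again a sketch only: the original SE block uses fully connected layers with biases, so the bias-free 1×1-conv form here is an assumption kept for symmetry with the code above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeExcitation(nn.Module):
    """Average-pooling-only channel attention, i.e. an SE-style gate."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the average-pooled descriptor is used; no max-pooling branch.
        return torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)))
```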

Source: CBAM: Convolutional Block Attention Module

Latest Papers

| PAPER | AUTHORS | DATE |
| --- | --- | --- |
| UFA-FUSE: A novel deep supervised and hybrid model for multi-focus image fusion | Yongsheng Zang, Dongming Zhou, Changcheng Wang, Rencan Nie, Yanbu Guo | 2021-01-12 |
| Spectral Response Function Guided Deep Optimization-driven Network for Spectral Super-resolution | Jiang He, Jie Li, Qiangqiang Yuan, Huanfeng Shen, Liangpei Zhang | 2020-11-19 |
| Attention-Guided Network for Iris Presentation Attack Detection | Cunjian Chen, Arun Ross | 2020-10-23 |
| CC-Loss: Channel Correlation Loss For Image Classification | Zeyu Song, Dongliang Chang, Zhanyu Ma, Xiaoxu Li, Zheng-Hua Tan | 2020-10-12 |
| CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation | Ran Gu, Guotai Wang, Tao Song, Rui Huang, Michael Aertsen, Jan Deprest, Sébastien Ourselin, Tom Vercauteren, Shaoting Zhang | 2020-09-22 |
| Dual Attention GANs for Semantic Image Synthesis | Hao Tang, Song Bai, Nicu Sebe | 2020-08-29 |
| Region-based Non-local Operation for Video Classification | Guoxi Huang, Adrian G. Bors | 2020-07-17 |
| Wavelet Channel Attention Module with a Fusion Network for Single Image Deraining | Hao-Hsiang Yang, Chao-Han Huck Yang, Yu-Chiang Frank Wang | 2020-07-17 |
| Attention as Activation | Yimian Dai, Stefan Oehmcke, Fabian Gieseke, Yiquan Wu, Kobus Barnard | 2020-07-15 |
| Correlation-Guided Attention for Corner Detection Based Visual Tracking | Fei Du, Peng Liu, Wei Zhao, Xianglong Tang | 2020-06-01 |
| Attention-based network for low-light image enhancement | Cheng Zhang, Qingsen Yan, Yu Zhu, Xianjun Li, Jinqiu Sun, Yanning Zhang | 2020-05-20 |
| All you need is a second look: Towards Tighter Arbitrary shape text detection | Meng Cao, Yuexian Zou | 2020-04-26 |
| Context-Aware Domain Adaptation in Semantic Segmentation | Jinyu Yang, Weizhi An, Chaochao Yan, Peilin Zhao, Junzhou Huang | 2020-03-09 |
| DR-GAN: Conditional Generative Adversarial Network for Fine-Grained Lesion Synthesis on Diabetic Retinopathy Images | Yi Zhou, Boyang Wang, Xiaodong He, Shanshan Cui, Ling Shao | 2019-12-10 |
| ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks | Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo, Qinghua Hu | 2019-10-08 |
| CBAM: Convolutional Block Attention Module | Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon | 2018-07-17 |
| Residual Attention Network for Image Classification | Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang | 2017-04-23 |

Categories