AFINets: Attentive Feature Integration Networks for Image Classification

1 Jan 2021  ·  Xinglin Pan, Jing Xu, Yu Pan, WenXiang Lin, Liangjian Wen, Zenglin Xu

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, e.g., image classification. Recent advances in CNNs, such as ResNets and DenseNets, mainly focus on skip and concatenation operators to avoid vanishing gradients. However, such operators largely neglect information across layers (as in ResNets) or involve tremendous redundancy of features repeatedly copied from previous layers (as in DenseNets). In this paper, we design Attentive Feature Integration (AFI) modules, which are applicable to most recent network architectures, leading to new architectures named AFINets. AFINets can adaptively integrate distinct information through explicitly modeling the subordinate relationship between different levels of features. Experimental results on benchmark datasets have demonstrated the effectiveness of the proposed AFI modules.
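To make the idea concrete, below is a minimal sketch of what an attention-based feature-integration block could look like, assuming all incoming feature maps share the same shape (C x H x W). The class name `AFIBlock`, the `hidden_dim` parameter, and the pooling-plus-MLP scoring scheme are illustrative assumptions, not the paper's exact module.

```python
# A minimal sketch (not the paper's implementation): earlier feature maps are
# pooled, scored by a small MLP, and fused as an attention-weighted sum.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AFIBlock(nn.Module):
    """Fuses features from several earlier layers with learned attention weights."""

    def __init__(self, num_inputs: int, channels: int, hidden_dim: int = 16):
        super().__init__()
        self.num_inputs = num_inputs
        # Small MLP mapping each pooled descriptor to a scalar score.
        self.score = nn.Sequential(
            nn.Linear(channels, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features):
        # features: list of K tensors, each of shape (N, C, H, W)
        assert len(features) == self.num_inputs
        stacked = torch.stack(features, dim=1)       # (N, K, C, H, W)
        pooled = stacked.mean(dim=(-2, -1))          # (N, K, C) global average pool
        scores = self.score(pooled).squeeze(-1)      # (N, K)
        weights = F.softmax(scores, dim=1)           # attention over the K inputs
        # Attention-weighted sum of the input feature maps.
        fused = (stacked * weights[:, :, None, None, None]).sum(dim=1)  # (N, C, H, W)
        return fused


if __name__ == "__main__":
    block = AFIBlock(num_inputs=3, channels=64)
    feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    print(block(feats).shape)  # torch.Size([2, 64, 32, 32])
```

This sketch keeps only a weighted sum over same-shaped inputs; handling mismatched channel counts or spatial sizes (as a real cross-layer module would need) is omitted for brevity.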
