no code implementations • CVPR 2021 • Dongsheng Ruan, Daiyin Wang, Yuan Zheng, Nenggan Zheng, Min Zheng
These approaches commonly learn the relationship between global contexts and attention activations by using fully-connected layers or linear transformations.
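To make this concrete, here is a minimal PyTorch sketch of the pattern the abstract describes, in the style of the Squeeze-and-Excitation (SE) block: global average pooling yields a per-channel context vector, and a small fully-connected bottleneck learns the mapping from that context to attention activations. The class name, `reduction=16` ratio, and layer layout are illustrative assumptions, not code from this paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """SE-style attention: FC layers learn the context-to-activation mapping."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),  # squeeze
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),  # excite
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        context = x.mean(dim=(2, 3))            # global average pooling -> (B, C)
        attention = self.fc(context)            # learned mapping -> (B, C)
        return x * attention.view(b, c, 1, 1)   # rescale the feature maps
```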
no code implementations • 6 Sep 2019 • Dongsheng Ruan, Jun Wen, Nenggan Zheng, Min Zheng
In this work, we first revisit the SE block, and then present a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module, called the Linear Context Transform (LCT) block.
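A minimal PyTorch sketch of an LCT-style block is shown below, assuming the design described in the abstract and paper: the pooled global context is normalized within channel groups, then transformed by a channel-wise linear transform and a sigmoid gate. The `groups=16` default and all identifiers are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class LCTBlock(nn.Module):
    """Linear Context Transform (sketch): normalize the global context within
    channel groups, then gate features via a per-channel linear transform."""

    def __init__(self, channels: int, groups: int = 16, eps: float = 1e-5):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        self.eps = eps
        self.w = nn.Parameter(torch.ones(channels))   # per-channel scale
        self.b = nn.Parameter(torch.zeros(channels))  # per-channel bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        z = x.mean(dim=(2, 3))                        # global context -> (N, C)
        z = z.view(n, self.groups, c // self.groups)  # split channels into groups
        mu = z.mean(dim=2, keepdim=True)
        var = z.var(dim=2, keepdim=True, unbiased=False)
        z = (z - mu) / torch.sqrt(var + self.eps)     # normalize within each group
        z = z.view(n, c)
        gate = torch.sigmoid(self.w * z + self.b)     # channel-wise linear transform
        return x * gate.view(n, c, 1, 1)              # gate the feature maps
```

Compared with the SE sketch above, the learnable part here is only a per-channel scale and bias rather than fully-connected layers, which is the "linear context transform" the title refers to.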