1 code implementation • 13 Feb 2022 • Nannan Li, Yaran Chen, Weifan Li, Zixiang Ding, Dongbin Zhao
In this paper, we propose broad attention, which improves the performance of vision transformers by incorporating the attention relationships across different layers; the resulting model is called BViT.
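The core idea of aggregating attention information from multiple transformer layers can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the function names (`layer_attention`, `broad_attention`) are hypothetical, and combining layer outputs by summation stands in for whatever aggregation BViT actually uses.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_attention(q, k, v):
    # standard scaled dot-product attention for a single layer
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def broad_attention(per_layer_qkv):
    # hypothetical sketch: compute attention per layer, then aggregate
    # the outputs across layers (summation assumed here)
    outs = [layer_attention(q, k, v) for q, k, v in per_layer_qkv]
    return np.sum(outs, axis=0)

rng = np.random.default_rng(0)
# three layers, each with its own (query, key, value) of shape (4 tokens, 8 dims)
layers = [tuple(rng.normal(size=(4, 8)) for _ in range(3)) for _ in range(3)]
out = broad_attention(layers)
print(out.shape)  # (4, 8)
```

The aggregated output has the same token/feature shape as a single layer's attention output, so it can be fed into subsequent transformer blocks unchanged.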