1 code implementation • 8 Feb 2023 • Yanwen Fang, Yuxi Cai, Jintai Chen, Jingyu Zhao, Guangjian Tian, Guodong Li
Motivated by this, we devise a cross-layer attention mechanism, called multi-head recurrent layer attention (MRLA), that sends a query representation of the current layer to all previous layers to retrieve query-related information from different levels of receptive fields.
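The abstract's idea — the current layer issues a query that attends over the representations of all previous layers — can be sketched minimally as follows. This is a single-head, vector-level illustration under assumed shapes and projection matrices (`W_q`, `W_k`, `W_v` and the pooled per-layer feature vectors are hypothetical); the paper's actual multi-head, recurrent parameterization over feature maps differs.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8           # assumed feature dimension
num_layers = 4  # number of layers whose outputs are retained

# Hypothetical per-layer outputs (e.g., globally pooled feature maps).
layer_feats = [rng.standard_normal(d) for _ in range(num_layers)]

# Hypothetical shared projection matrices for queries, keys, and values.
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

def layer_attention(t, feats):
    """Query from layer t attends over layers 0..t (single-head sketch)."""
    q = W_q @ feats[t]
    ks = np.stack([W_k @ f for f in feats[: t + 1]])  # (t+1, d)
    vs = np.stack([W_v @ f for f in feats[: t + 1]])  # (t+1, d)
    scores = ks @ q / np.sqrt(d)                      # similarity per layer
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over layers
    return weights @ vs                               # aggregated cross-layer context

ctx = layer_attention(num_layers - 1, layer_feats)
print(ctx.shape)
```

Each layer thus retrieves information from every earlier level of the hierarchy, weighted by query–key similarity, rather than seeing only its immediate predecessor.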
no code implementations • 17 Sep 2022 • Yuxi Cai, Huicheng Lai, Zhenghong Jia
In addition, the exchange of information between attention modules remains largely opaque to researchers.
no code implementations • 27 May 2022 • Yuxi Cai, Huicheng Lai
Image super-resolution reconstruction achieves better results than traditional methods thanks to the powerful nonlinear representation ability of convolutional neural networks.