2 code implementations • 19 May 2023 • Ling Zheng, Jinchen Zhu, Jinpeng Shi, Shizhuang Weng
Specifically, we propose the Mixed Transformer Block (MTB), consisting of multiple consecutive transformer layers, in some of which the Pixel Mixer (PM) replaces Self-Attention (SA).
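The Pixel Mixer named above can be illustrated as a parameter-free, channel-grouped spatial shift. The sketch below is an assumption about its mechanics (four channel groups, each shifted one pixel in a different direction), not the paper's exact specification:

```python
import numpy as np

def pixel_mixer(x, shift=1):
    """Hypothetical Pixel Mixer sketch: split channels into four groups
    and shift each group by `shift` pixels in a different spatial
    direction, mixing neighboring pixels with zero learned parameters.
    x: array of shape (C, H, W) with C divisible by 4."""
    out = x.copy()
    c = x.shape[0] // 4
    out[0 * c:1 * c] = np.roll(x[0 * c:1 * c], shift, axis=1)   # shift down
    out[1 * c:2 * c] = np.roll(x[1 * c:2 * c], -shift, axis=1)  # shift up
    out[2 * c:3 * c] = np.roll(x[2 * c:3 * c], shift, axis=2)   # shift right
    out[3 * c:4 * c] = np.roll(x[3 * c:4 * c], -shift, axis=2)  # shift left
    return out

# Toy feature map: 4 channels, 3x3 spatial grid
x = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)
y = pixel_mixer(x)
```

In an MTB layer where PM stands in for SA, such a shift would supply local pixel interaction at no parameter or attention cost; the surrounding transformer layers would still provide global modeling.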
1 code implementation • 24 Jan 2023 • Jinpeng Shi, Hui Li, Tianle Liu, Yulong Liu, Mingjian Zhang, Jinchen Zhu, Ling Zheng, Shizhuang Weng
However, the challenge of balancing model performance against complexity has hindered their application in lightweight SR (LSR).