1 code implementation • CVPR 2023 • Tongkun Guan, Chaochen Gu, Jingzheng Tu, Xue Yang, Qi Feng, Yudi Zhao, Xiaokang Yang, Wei Shen
Supervised attention can alleviate the above issue, but it is character category-specific: it requires extra laborious character-level bounding-box annotations and becomes memory-intensive when handling languages with larger character sets.
Ranked #2 on Scene Text Recognition on ICDAR 2003
1 code implementation • 25 Oct 2021 • Tongkun Guan, Chaochen Gu, Changsheng Lu, Jingzheng Tu, Qi Feng, Kaijie Wu, Xinping Guan
Then, an attentive refinement network uses the attention map to rectify the location deviation of candidate boxes.
no code implementations • 13 Sep 2021 • Jingzheng Tu, Qimin Xu, Cailian Chen
This delays task completion and degrades the accuracy of vision detection tasks.
no code implementations • 24 Dec 2019 • Jingzheng Tu, Guoxian Yu, Jun Wang, Carlotta Domeniconi, Xiangliang Zhang
However, they all assume that a worker's label quality is stable over time (i.e., it remains at the same level whenever the worker performs tasks).
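To illustrate the assumption being critiqued, here is a minimal sketch of static-weight label aggregation (a weighted majority vote), where each worker is assigned one fixed reliability score regardless of when the labels were produced. All names and values are hypothetical, not taken from the paper:

```python
# Weighted majority vote with one static reliability weight per worker.
# This encodes the "stable label quality" assumption: a worker's weight
# never changes over time. All identifiers here are illustrative.
from collections import defaultdict

def aggregate(labels, reliability):
    """labels: dict task -> list of (worker, label) pairs.
    reliability: dict worker -> fixed reliability weight.
    Returns the inferred label per task."""
    truth = {}
    for task, votes in labels.items():
        scores = defaultdict(float)
        for worker, label in votes:
            scores[label] += reliability[worker]  # static weight, time-invariant
        truth[task] = max(scores, key=scores.get)
    return truth

labels = {"t1": [("w1", "cat"), ("w2", "dog"), ("w3", "cat")]}
reliability = {"w1": 0.9, "w2": 0.6, "w3": 0.7}
print(aggregate(labels, reliability))  # -> {'t1': 'cat'}
```

A time-aware model would instead let `reliability` vary per task or per time window, which is precisely what the stable-quality assumption rules out.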