no code implementations • 24 May 2023 • Wei Zhou, Qian Wang, Weiwei Jin, Xinzhe Shi, Ying He
The Local Transformer builds a dynamic graph and computes the weights of all neighboring points via intra-domain cross-attention with dynamically updated graph relations, so that each neighboring point influences the centroid's features with its own weight; the Global Transformer enlarges the Local Transformer's receptive field through global self-attention.
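The mechanism described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it omits learned projection matrices, multi-head attention, and positional encodings, and the function names (`knn_indices`, `local_cross_attention`, `global_self_attention`) are hypothetical. It only shows the core idea: each centroid attends to its k dynamically computed neighbors with per-neighbor weights, and a global self-attention stage covers all points.

```python
import numpy as np

def knn_indices(coords, k):
    # Dynamic graph: k nearest neighbors per point, recomputed from coords.
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # exclude the point itself

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def local_cross_attention(feats, coords, k=4):
    # Local Transformer sketch: each centroid (query) attends to its k
    # graph neighbors (keys/values), so every neighbor contributes with
    # a different attention weight.
    idx = knn_indices(coords, k)              # (N, k) dynamic graph
    q = feats                                 # centroid features, (N, C)
    kv = feats[idx]                           # neighbor features, (N, k, C)
    logits = np.einsum('nc,nkc->nk', q, kv) / np.sqrt(feats.shape[1])
    w = softmax(logits, axis=1)               # per-neighbor weights, sum to 1
    return np.einsum('nk,nkc->nc', w, kv)     # weighted aggregation

def global_self_attention(feats):
    # Global Transformer sketch: full self-attention over all points
    # enlarges the receptive field beyond the local neighborhood.
    logits = feats @ feats.T / np.sqrt(feats.shape[1])
    return softmax(logits, axis=1) @ feats
```

In practice the local and global stages would interleave learned linear projections and feed-forward layers; here both stages preserve the feature shape, so they can be stacked directly.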
no code implementations • 10 May 2023 • Wei Zhou, Weiwei Jin, Qian Wang, Yifan Wang, Dekui Wang, Xingxing Hao, Yongxiang Yu
Recently, Transformer-based methods for point cloud learning have achieved strong results on a variety of benchmarks.