1 code implementation • 2 Oct 2022 • Yubo Cui, Jiayao Shan, Zuoxu Gu, Zhiheng Li, Zheng Fang
Meanwhile, the encoder applies attention over multi-scale features to compensate for the information loss caused by the sparsity of point clouds and by relying on a single feature scale.
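The idea of attending over features from several scales can be sketched as follows. This is a minimal, hypothetical illustration in numpy (the function name, shapes, and single-head formulation are assumptions, not the paper's API): tokens from all scales are pooled into one key/value set, so queries at a sparse fine scale can draw information from coarser scales.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_scale_attention(query, scale_feats):
    """Single-head scaled dot-product attention from `query` tokens over
    features gathered from several scales (hypothetical sketch)."""
    # Concatenate tokens from all scales into one key/value set: (sum_i N_i, d)
    kv = np.concatenate(scale_feats, axis=0)
    d = query.shape[-1]
    attn = softmax(query @ kv.T / np.sqrt(d))  # (N_q, sum_i N_i)
    return attn @ kv                           # (N_q, d)

# Toy usage: 8 query tokens, two scales with 16 and 4 tokens, feature dim 32.
rng = np.random.default_rng(0)
q = rng.normal(size=(8, 32))
feats = [rng.normal(size=(16, 32)), rng.normal(size=(4, 32))]
out = multi_scale_attention(q, feats)
```

The output keeps the query's token count and dimension, so it can replace the original single-scale feature downstream.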
1 code implementation • 28 Oct 2021 • Yubo Cui, Zheng Fang, Jiayao Shan, Zuoxu Gu, Sifan Zhou
By using cross-attention, the transformer decoder fuses features and incorporates more target cues into the current point cloud feature when computing region attention, which makes the similarity computation more efficient.
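A decoder-style cross-attention of this kind can be sketched as below. This is an illustrative numpy sketch under assumed shapes and names (not the authors' implementation): queries come from the current (search) point cloud feature, keys/values from the template feature, and the attended template cues are added back residually before any similarity is computed.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(search_feat, template_feat):
    """Fuse target cues from `template_feat` into `search_feat` via
    single-head cross-attention (hypothetical sketch)."""
    d = search_feat.shape[-1]
    # Queries: search tokens; keys/values: template tokens.
    attn = softmax(search_feat @ template_feat.T / np.sqrt(d))  # (N_s, N_t)
    # Residual add mixes target cues into the current feature.
    return search_feat + attn @ template_feat                   # (N_s, d)

# Toy usage: 64 search tokens, 32 template tokens, feature dim 16.
rng = np.random.default_rng(1)
fused = cross_attention_fuse(rng.normal(size=(64, 16)),
                             rng.normal(size=(32, 16)))
```

Because the fused feature already carries target information, a subsequent similarity step only needs to compare enriched search tokens rather than matching raw template and search features from scratch.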