Attention for Image Registration (AiR): an unsupervised Transformer approach

Image registration is a crucial task in signal processing, but it often encounters issues with stability and efficiency. Non-learning registration approaches rely on optimizing similarity metrics between fixed and moving images, which can be expensive in terms of time and space complexity. This problem is exacerbated when the images are large or when there are significant deformations between them. Recently, deep learning methods, specifically convolutional neural network (CNN)-based approaches, have been explored as an effective solution to the weaknesses of non-learning approaches. To further advance learning approaches in image registration, we introduce an attention mechanism into the deformable image registration problem. Our proposed approach is based on a Transformer framework called AiR, which can be efficiently trained on GPGPU devices. We treat the image registration problem as a language translation task and use the Transformer to learn the deformation field. The method learns the deformation map in an unsupervised manner and is evaluated on two benchmark datasets. In summary, our approach shows promising effectiveness in addressing stability and efficiency issues in image registration tasks. The source code of AiR is available on GitHub.
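To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a Transformer can be used in the "translation"-style setup the abstract describes: patch embeddings of the fixed and moving images attend to each other, the decoder output is mapped to a dense deformation field, and training is unsupervised via an image-similarity loss on the warped moving image. All module names, sizes, and the MSE loss here are illustrative assumptions.

```python
# Sketch of an unsupervised Transformer for deformable registration.
# Assumed names/hyperparameters (AiRSketch, patch=8, dim=128, MSE loss, ...)
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AiRSketch(nn.Module):
    def __init__(self, img_size=64, patch=8, dim=128, heads=4, layers=4):
        super().__init__()
        self.img_size, self.patch = img_size, patch
        n_patches = (img_size // patch) ** 2
        self.embed_fix = nn.Conv2d(1, dim, patch, stride=patch)  # fixed-image patches -> "source" tokens
        self.embed_mov = nn.Conv2d(1, dim, patch, stride=patch)  # moving-image patches -> "target" tokens
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.transformer = nn.Transformer(d_model=dim, nhead=heads,
                                          num_encoder_layers=layers,
                                          num_decoder_layers=layers,
                                          batch_first=True)
        self.to_flow = nn.Linear(dim, 2 * patch * patch)  # per-patch (dx, dy) displacements

    def forward(self, fixed, moving):
        B = fixed.size(0)
        src = self.embed_fix(fixed).flatten(2).transpose(1, 2) + self.pos
        tgt = self.embed_mov(moving).flatten(2).transpose(1, 2) + self.pos
        tokens = self.transformer(src, tgt)  # cross-attention between the two images
        side = self.img_size // self.patch
        flow = self.to_flow(tokens).view(B, side, side, self.patch, self.patch, 2)
        flow = flow.permute(0, 5, 1, 3, 2, 4).reshape(B, 2, self.img_size, self.img_size)
        return flow

def warp(moving, flow):
    """Warp the moving image with the predicted displacement field (in pixels)."""
    B, _, H, W = moving.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(moving.device)  # identity sampling grid
    new = grid.unsqueeze(0) + flow
    # normalize coordinates to [-1, 1] as expected by grid_sample
    new_x = 2.0 * new[:, 0] / (W - 1) - 1.0
    new_y = 2.0 * new[:, 1] / (H - 1) - 1.0
    return F.grid_sample(moving, torch.stack((new_x, new_y), dim=-1), align_corners=True)

# Unsupervised training step: no ground-truth deformation field is needed.
model = AiRSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
fixed, moving = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
opt.zero_grad()
flow = model(fixed, moving)
loss = F.mse_loss(warp(moving, flow), fixed)  # similarity of warped moving vs. fixed drives learning
loss.backward()
opt.step()
```

The key design choice this sketch illustrates is that the loss is computed purely from image similarity after warping, so the deformation field is learned without supervised deformation labels.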