Task Transformer Network for Joint MRI Reconstruction and Super-Resolution

12 Jun 2021  ·  Chun-Mei Feng, Yunlu Yan, Huazhu Fu, Li Chen, Yong Xu

The core problem of Magnetic Resonance Imaging (MRI) is the trade-off between acceleration and image quality. Image reconstruction and super-resolution are two crucial techniques for addressing it, but current methods perform these tasks separately, ignoring the correlations between them. In this work, we propose an end-to-end task transformer network (T$^2$Net) for joint MRI reconstruction and super-resolution, which allows representations and feature transmission to be shared between the tasks to achieve higher-quality, super-resolved, and motion-artifact-free images from highly undersampled and degraded MRI data. Our framework combines reconstruction and super-resolution in two sub-branches, whose features are expressed as queries and keys. Specifically, we encourage joint feature learning between the two tasks, thereby transferring accurate task information. We first use two separate CNN branches to extract task-specific features. Then, a task transformer module is designed to embed and synthesize the relevance between the two tasks. Experimental results show that our multi-task model significantly outperforms advanced sequential methods, both quantitatively and qualitatively.
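
The abstract describes two task-specific CNN branches whose features are fused by a task transformer module, with one branch's features acting as queries and the other's as keys. The following is a minimal PyTorch sketch of that idea; the module names, channel sizes, query/key assignment, and the use of nn.MultiheadAttention for the task transformer are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a two-branch network with a "task transformer" fusion step.
# All design details here are assumptions made for illustration.
import torch
import torch.nn as nn


class TaskBranch(nn.Module):
    """Small CNN branch extracting task-specific features (assumed design)."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class TaskTransformer(nn.Module):
    """Cross-attention between the two branches: super-resolution features as
    queries, reconstruction features as keys/values (hypothetical layout)."""
    def __init__(self, feat_ch=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=feat_ch, num_heads=heads,
                                          batch_first=True)

    def forward(self, sr_feat, rec_feat):
        b, c, h, w = sr_feat.shape
        q = sr_feat.flatten(2).transpose(1, 2)    # (B, H*W, C) queries from SR branch
        kv = rec_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from recon branch
        fused, _ = self.attn(q, kv, kv)           # transfer reconstruction cues into SR features
        return fused.transpose(1, 2).reshape(b, c, h, w)


class T2NetSketch(nn.Module):
    """End-to-end sketch: two branches + task transformer + upsampling head."""
    def __init__(self, scale=2, feat_ch=64):
        super().__init__()
        self.rec_branch = TaskBranch(feat_ch=feat_ch)
        self.sr_branch = TaskBranch(feat_ch=feat_ch)
        self.fuse = TaskTransformer(feat_ch=feat_ch)
        self.rec_head = nn.Conv2d(feat_ch, 1, 3, padding=1)
        self.sr_head = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),               # upsample fused features
            nn.Conv2d(feat_ch, 1, 3, padding=1),
        )

    def forward(self, lr_undersampled):
        rec_feat = self.rec_branch(lr_undersampled)  # reconstruction branch features
        sr_feat = self.sr_branch(lr_undersampled)    # super-resolution branch features
        fused = self.fuse(sr_feat, rec_feat)         # task transformer fusion
        return self.rec_head(rec_feat), self.sr_head(fused)


if __name__ == "__main__":
    x = torch.randn(1, 1, 64, 64)                    # undersampled, low-resolution input
    rec_out, sr_out = T2NetSketch(scale=2)(x)
    print(rec_out.shape, sr_out.shape)               # (1, 1, 64, 64), (1, 1, 128, 128)
```

The key design point the sketch tries to capture is that the fusion is directional: reconstruction features inform the super-resolution path through attention, so both outputs are produced jointly rather than by chaining two separately trained models.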


Datasets

IXI

Results from the Paper


Task                     Dataset  Model  Metric           Value    Global Rank
Image Super-Resolution   IXI      T2Net  SSIM (2x, T2w)   0.8720   #9
Image Super-Resolution   IXI      T2Net  PSNR (2x, T2w)   29.38    #9
Image Super-Resolution   IXI      T2Net  SSIM (4x, T2w)   0.8500   #9
Image Super-Resolution   IXI      T2Net  PSNR (4x, T2w)   28.66    #9

Methods


No methods listed for this paper.