A Tool for Super-Resolving Multimodal Clinical MRI

3 Sep 2019 · Mikael Brudfors, Yael Balbastre, Parashkev Nachev, John Ashburner

We present a tool for resolution recovery in multimodal clinical magnetic resonance imaging (MRI). Such images exhibit great variability, both biological and instrumental. This variability makes automated processing with neuroimaging analysis software very challenging. This leaves intelligence extractable only from large-scale analyses of clinical data untapped, and impedes the introduction of automated predictive systems in clinical care. The tool presented in this paper enables such processing, via inference in a generative model of thick-sliced, multi-contrast MR scans. All model parameters are estimated from the observed data, without the need for manual tuning. The model-driven nature of the approach means that no type of training is needed for applicability to the diversity of MR contrasts present in a clinical context. We show on simulated data that the proposed approach outperforms conventional model-based techniques, and on a large hospital dataset of multimodal MRIs that the tool can successfully super-resolve very thick-sliced images. The implementation is available from https://github.com/brudfors/spm_superres.
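
The central idea, recovering an unknown high-resolution volume by inverting a generative (forward) model in which each observed thick-sliced scan is a blurred, downsampled, noisy view of that volume, can be illustrated with a minimal sketch. The Python code below is a hypothetical single-channel illustration only, assuming a simple slice-averaging forward model, a Tikhonov smoothness prior, and plain gradient descent; all function names and parameter values are assumptions for exposition and it is not the authors' tool, which is available from the repository linked above.

```python
import numpy as np

def blur_and_downsample(x, factor):
    """Forward model: average `factor` consecutive thin slices (axis 0)
    into one thick slice, mimicking thick-slice acquisition."""
    nz = (x.shape[0] // factor) * factor
    return x[:nz].reshape(-1, factor, *x.shape[1:]).mean(axis=1)

def adjoint(y, factor):
    """Adjoint of the forward model: spread each thick slice back over
    the thin slices it averaged."""
    return np.repeat(y, factor, axis=0) / factor

def super_resolve(y, factor, lam=0.1, step=0.5, n_iter=200):
    """Recover a thin-slice volume by minimising
    0.5*||A x - y||^2 + 0.5*lam*||grad_z x||^2 with gradient descent."""
    x = np.repeat(y, factor, axis=0)          # nearest-neighbour initialisation
    for _ in range(n_iter):
        g = adjoint(blur_and_downsample(x, factor) - y, factor)  # data term
        lap = np.zeros_like(x)                # 1-D Laplacian along slice axis
        lap[1:-1] = x[2:] - 2.0 * x[1:-1] + x[:-2]
        g -= lam * lap                        # smoothness (Tikhonov) term
        x -= step * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((32, 64, 64))             # synthetic thin-slice volume
    lr = blur_and_downsample(hr, 4) + 0.01 * rng.standard_normal((8, 64, 64))
    recon = super_resolve(lr, factor=4)
    print(recon.shape)                        # (32, 64, 64)
```

In this sketch the regularisation weight, step size, and slice profile are fixed by hand; in the approach described in the abstract, the model parameters are instead estimated from the observed data, and the model is defined jointly over multiple MR contrasts rather than a single channel.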
