Discovering and interpreting transcriptomic drivers of imaging traits using neural networks

11 Dec 2019  ·  Nova F. Smedley, Suzie El-Saden, William Hsu ·

Motivation. Cancer heterogeneity is observed at multiple biological levels. To improve our understanding of these differences and their relevance in medicine, approaches are needed to link organ- and tissue-level information from diagnostic images with cellular-level information from genomics. However, these "radiogenomic" studies often use linear, shallow models, depend on feature selection, or consider one gene at a time when mapping images to genes. Moreover, no study has systematically attempted to understand the molecular basis of imaging traits by interpreting what a neural network has learned. Current studies are thus limited in their ability to identify the transcriptomic drivers of imaging traits, which could provide additional context for determining clinical traits, such as prognosis.

Results. We present a neural network-based approach that takes high-dimensional gene expression profiles as input and performs a nonlinear mapping to an imaging trait. To interpret the models, we propose gene masking and gene saliency to extract learned relationships from radiogenomic neural networks. In glioblastoma patients, our models outperformed comparable classifiers by more than 0.10 AUC, and our interpretation methods were validated on a similar model by recovering known relationships between genes and molecular subtypes. We found that imaging traits had specific transcription patterns, e.g., edema was associated with genes related to cellular invasion, and 15 radiogenomic associations were predictive of survival. We demonstrate that neural networks can model transcriptomic heterogeneity to reflect differences in imaging and can be used to derive radiogenomic associations with clinical value.
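The two interpretation ideas named above can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: gene masking replaces one gene's expression at a time (here, with the cohort mean) and measures the change in the model's predicted probability of an imaging trait, while gene saliency takes the gradient of the output with respect to each gene. A single-layer logistic classifier with assumed toy weights stands in for the paper's deeper networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 5
weights = np.array([2.0, -1.0, 0.0, 0.5, 0.0])  # assumed toy weights
bias = 0.1

def predict(x):
    """Sigmoid output: probability that the imaging trait is present."""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

expressions = rng.normal(size=(100, n_genes))  # toy expression matrix
baseline = predict(expressions)                # unmasked predictions
gene_means = expressions.mean(axis=0)

# Gene masking: importance of gene g = mean absolute change in the
# prediction when gene g is replaced by its cohort mean.
importance = np.zeros(n_genes)
for g in range(n_genes):
    masked = expressions.copy()
    masked[:, g] = gene_means[g]
    importance[g] = np.abs(predict(masked) - baseline).mean()

# Gene saliency: gradient of the sigmoid output w.r.t. each gene.
# For a logistic model this is sigmoid(z) * (1 - sigmoid(z)) * w_g.
saliency = (baseline * (1.0 - baseline))[:, None] * weights[None, :]

print(importance)          # genes with zero weight get zero importance
print(np.abs(saliency).mean(axis=0))
```

Genes the model ignores (zero weight) receive zero masking importance and zero saliency, while heavily weighted genes rank highest; the paper applies the same principle to identify transcriptomic drivers of imaging traits.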
