When will the mist clear? On the Interpretability of Machine Learning for Medical Applications: a survey

Artificial Intelligence is delivering astonishing results, and medicine is one of its favourite playgrounds. Within a few decades, computers may formulate diagnoses and choose the correct treatment, robots may perform surgical operations, and conversational agents may interact with patients as virtual coaches. Machine Learning and, in particular, Deep Neural Networks are behind this revolution. In this scenario, important decisions will be controlled by standalone machines that have learned predictive models from the data provided. Cancer diagnosis and therapy are among the most challenging medical targets but, for this revolution to take hold, software tools need to be adapted to the new requirements. Learning tools are becoming a commodity in libraries for Python and Matlab, to name just two, but to exploit all their possibilities it is essential to understand how models are interpreted and which models are more interpretable than others. In this survey, we analyse current machine learning models, frameworks, databases and other related tools as applied to medicine (specifically, to cancer research) and discuss their interpretability, performance and the input data they require. The available evidence shows that artificial neural networks (ANNs), logistic regression (LR) and support vector machines (SVMs) are the preferred models. In addition, convolutional neural networks (CNNs), supported by the rapid development of GPUs and tensor-oriented programming libraries, are gaining importance. However, the interpretability of results by doctors is rarely considered, and this is a factor that needs to be improved. We therefore consider this study to be a timely contribution to the issue.
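
As a concrete illustration of the interpretability point raised in the abstract, the sketch below shows how one of the preferred models, logistic regression, exposes its reasoning directly. This is a minimal example, not taken from the paper: it assumes scikit-learn and uses its bundled breast-cancer dataset as a stand-in for clinical data, and the ranked coefficients it prints are the kind of per-feature explanation a clinician could inspect.

```python
# Minimal sketch (illustrative, not from the paper): an interpretable
# logistic regression on the scikit-learn breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0, stratify=data.target
)

# Standardise features so the learned coefficients are comparable,
# then fit the linear classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")

# Interpretability: each coefficient maps to a named clinical feature,
# so the model's decision can be read off feature by feature.
coefs = clf.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs),
                key=lambda p: abs(p[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.3f}")
```

A deep network trained on the same task offers no comparably direct mapping from parameters to named features, which is the trade-off between performance and interpretability that the survey examines.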
