Can we trust deep learning models diagnosis? The impact of domain shift in chest radiograph classification

3 Sep 2019  ·  Eduardo H. P. Pooch, Pedro L. Ballester, Rodrigo C. Barros ·

As deep learning models become more widespread, their ability to handle unseen data and generalize across scenarios remains largely untested. In medical imaging, image distributions are highly heterogeneous, varying with the equipment that acquires them and its parametrization. This heterogeneity triggers a common problem in machine learning called domain shift: a mismatch between the distribution of the training data and the distribution of the data on which the model is deployed. A large domain shift tends to result in poor generalization performance. In this work, we evaluate the extent of domain shift on four of the largest chest radiograph datasets. We show that training and testing on different datasets (e.g., training on ChestX-ray14 and testing on CheXpert) drastically degrades model performance, casting doubt on the reliability of deep learning models trained on public datasets. We also show that models trained on CheXpert and MIMIC-CXR generalize better to other datasets.
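The cross-dataset evaluation the abstract describes (train on one dataset, test on another) can be illustrated with a toy sketch. This is not the paper's code: the two synthetic "domains", the choice of which feature axis carries the class signal, and all effect sizes are illustrative assumptions standing in for real acquisition differences between chest X-ray datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, signal_axis):
    """Binary labels with class signal on one feature axis.
    Which axis carries the signal stands in for domain-specific
    acquisition differences (equipment, parametrization)."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2))
    x[:, signal_axis] += 1.5 * y
    return x, y.astype(float)

def fit_logreg(x, y, lr=0.5, steps=2000):
    """Plain logistic regression via gradient descent (no libraries)."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        g = p - y
        w -= lr * (x.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def auc(y, s):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical "domain A" carries the signal on axis 0, "domain B"
# on axis 1 -- a stand-in for domain shift between datasets.
xa_train, ya_train = make_dataset(2000, signal_axis=0)
xa_test, ya_test = make_dataset(1000, signal_axis=0)
xb_test, yb_test = make_dataset(1000, signal_axis=1)

w, b = fit_logreg(xa_train, ya_train)
auc_in = auc(ya_test, xa_test @ w + b)     # in-domain performance
auc_cross = auc(yb_test, xb_test @ w + b)  # performance under domain shift
print(f"in-domain AUC:    {auc_in:.3f}")
print(f"cross-domain AUC: {auc_cross:.3f}")
```

Under this construction the in-domain AUC is well above chance while the cross-domain AUC collapses toward 0.5, mirroring the kind of performance drop the paper reports when training and test datasets differ.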
