no code implementations • 30 Apr 2024 • Justin Engelmann, Miguel O. Bernabeu
We propose a novel Token Reconstruction objective that we use to train RETFound-Green, a retinal foundation model trained using only 75,000 publicly available images and 400 times less compute.
no code implementations • 11 Mar 2024 • Justin Engelmann, Diana Moukaddem, Lucas Gago, Niall Strang, Miguel O. Bernabeu
In GRAPE, Pearson/Spearman correlation (first and next visit) was 0.7479/0.7474 for DART, and 0.7109/0.7208 for AutoMorph (all p<0.0001).
1 code implementation • 5 Dec 2023 • Justin Engelmann, Jamie Burke, Charlene Hamid, Megan Reid-Schachter, Dan Pugh, Neeraj Dhaun, Diana Moukaddem, Lyle Gray, Niall Strang, Paul McGraw, Amos Storkey, Paul J. Steptoe, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Ian J. C. MacCormick
We analysed segmentation agreement (AUC, Dice) and choroid metrics agreement (Pearson, Spearman, mean absolute error (MAE)) in internal and external test sets.
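The agreement metrics named above (Dice for segmentation overlap; Pearson, Spearman, and MAE for derived choroid measurements) can be sketched with plain NumPy. The arrays below are illustrative stand-ins, not data from the study:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # Spearman = Pearson computed on ranks (assumes no ties).
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

def mae(x, y):
    return np.mean(np.abs(np.asarray(x, float) - np.asarray(y, float)))

# Toy example: two nearly identical 2x3 segmentation masks.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 1, 0], [0, 0, 0]])
print(dice(m1, m2))  # 0.8
```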
1 code implementation • 25 Jul 2023 • Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
For this task, we present a second model, QuickQual MEga Minified Estimator (QuickQual-MEME), that consists of only 10 parameters on top of an off-the-shelf Densenet121 and can distinguish between gradable and ungradable images with an accuracy of 89.18% (AUC: 0.9537).
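The general shape of such a tiny head (a learned layer with ~10 parameters on top of frozen deep features) can be sketched as a logistic layer with 9 weights plus a bias. This is a hypothetical illustration of the idea only: the exact inputs and construction of QuickQual-MEME are in the paper, and the 9-d feature summary here is a stand-in rather than real Densenet121 features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_gradable(x, w, b):
    """10-parameter head: 9 weights + 1 bias over a 9-d feature summary.
    In the paper the inputs derive from a frozen Densenet121; here x is
    just an illustrative vector."""
    return sigmoid(x @ w + b)

w = np.zeros(9)   # hypothetical learned weights
b = 0.0           # hypothetical learned bias
x = np.ones(9)    # stand-in feature summary
p = predict_gradable(x, w, b)
print(p)  # 0.5 with all-zero weights (sigmoid(0))
```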
1 code implementation • 3 Jul 2023 • Jamie Burke, Justin Engelmann, Charlene Hamid, Megan Reid-Schachter, Tom Pearson, Dan Pugh, Neeraj Dhaun, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Amos Storkey, Ian J. C. MacCormick
Results: DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s ($\pm$15.09) using GPET to 1.25s ($\pm$0.10) using DeepGPET.
no code implementations • 12 Jul 2022 • Justin Engelmann, Ana Villaplana-Velasco, Amos Storkey, Miguel O. Bernabeu
Thus, methods for calculating retinal traits tend to be complex, multi-step pipelines that can only be applied to high quality images.
1 code implementation • 11 Mar 2022 • Justin Engelmann, Alice D. McTrusty, Ian J. C. MacCormick, Emma Pead, Amos Storkey, Miguel O. Bernabeu
Previous studies showed that deep learning (DL) models are effective for detecting retinal disease in UWF images, but primarily considered individual diseases under less-than-realistic conditions (excluding images with other diseases, artefacts, comorbidities, or borderline cases; and balancing healthy and diseased images) and did not systematically investigate which regions of the UWF images are relevant for disease detection.
no code implementations • 17 Dec 2021 • Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
We propose the pixel-wise aggregation of image-wise explanations as a simple method to obtain label-wise and overall global explanations.
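The proposed aggregation is simple to sketch: given one explanation (saliency) map per image, averaging maps pixel-wise within each label yields label-wise global explanations, and averaging across all images yields an overall one. Array shapes here are illustrative:

```python
import numpy as np

def global_explanations(saliency, labels):
    """saliency: (N, H, W) per-image explanation maps;
    labels: (N,) integer label per image.
    Returns a dict of label -> pixel-wise mean map, plus the overall mean."""
    labelwise = {c: saliency[labels == c].mean(axis=0)
                 for c in np.unique(labels)}
    overall = saliency.mean(axis=0)
    return labelwise, overall

# Toy example: four 2x2 maps, two labels.
sal = np.arange(16, dtype=float).reshape(4, 2, 2)
lab = np.array([0, 0, 1, 1])
per_label, overall = global_explanations(sal, lab)
```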
1 code implementation • 20 Aug 2020 • Justin Engelmann, Stefan Lessmann
Class imbalance is a common problem in supervised learning and impedes the predictive performance of classification models.
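To make the problem concrete, one common baseline remedy (shown purely as an illustration, not as the method proposed in the paper) is random oversampling: duplicating minority-class rows until the classes are balanced. More sophisticated approaches generate synthetic minority samples instead:

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class rows at random until every class matches
    the size of the largest class. Baseline illustration only."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        extra = rng.choice(idx, size=n_max - idx.size, replace=True)
        keep = np.concatenate([idx, extra])
        Xs.append(X[keep])
        ys.append(y[keep])
    return np.vstack(Xs), np.concatenate(ys)

# Toy example: 4 majority rows, 1 minority row -> 4 of each after resampling.
X = np.arange(10, dtype=float).reshape(5, 2)
y = np.array([0, 0, 0, 0, 1])
Xr, yr = random_oversample(X, y, rng=0)
```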