HyMNet: a Multimodal Deep Learning System for Hypertension Classification using Fundus Photographs and Cardiometabolic Risk Factors

In recent years, deep learning has shown promise in predicting hypertension (HTN) from fundus images. However, most prior research has focused on a single data modality, which may not capture the full complexity of HTN risk. To address this limitation, this study introduces a multimodal deep learning (MMDL) system, dubbed HyMNet, which combines fundus images with cardiometabolic risk factors, specifically age and gender, to improve hypertension detection. Our MMDL system uses RETFound, a foundation model pre-trained on 1.6 million retinal images, for the fundus path and a fully connected neural network for the age and gender path. The two paths are trained jointly: the feature vectors from each path are concatenated and fed into a fusion network. The system was trained on 5,016 retinal images from 1,243 individuals collected from the Saudi Ministry of National Guard Health Affairs. The results show that the multimodal model integrating fundus images with age and gender outperforms the unimodal model trained solely on fundus photographs, achieving F1 scores of 0.771 [0.747, 0.796] and 0.745 [0.719, 0.772] for hypertension detection, respectively. Additionally, we studied the effect of underlying diabetes mellitus on the model's predictive ability, concluding that diabetes acts as a confounding variable when distinguishing hypertensive cases. Our code and model weights are publicly available at https://github.com/MohammedSB/HyMNet.
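
The following is a minimal sketch of the two-path fusion architecture described in the abstract, written in PyTorch. It assumes a RETFound-style encoder that maps a fundus image to a fixed-size feature vector; the layer sizes, hidden dimensions, and class names are illustrative assumptions rather than the authors' exact implementation (see the linked repository for the real code).

```python
# Sketch of a two-path fusion model: a fundus image encoder (e.g. a
# RETFound backbone) and a small fully connected path for age and gender,
# concatenated and passed to a fusion head that outputs an HTN logit.
# All dimensions here are assumptions for illustration.
import torch
import torch.nn as nn


class HyMNetSketch(nn.Module):
    def __init__(self, fundus_encoder: nn.Module, fundus_dim: int = 1024,
                 demo_dim: int = 2, demo_hidden: int = 32):
        super().__init__()
        self.fundus_encoder = fundus_encoder           # pretrained image backbone
        self.demo_path = nn.Sequential(                # age + gender path
            nn.Linear(demo_dim, demo_hidden),
            nn.ReLU(),
        )
        self.fusion = nn.Sequential(                   # fusion network
            nn.Linear(fundus_dim + demo_hidden, 256),
            nn.ReLU(),
            nn.Linear(256, 1),                         # HTN vs. non-HTN logit
        )

    def forward(self, fundus: torch.Tensor, demographics: torch.Tensor):
        img_feat = self.fundus_encoder(fundus)         # (B, fundus_dim)
        demo_feat = self.demo_path(demographics)       # (B, demo_hidden)
        fused = torch.cat([img_feat, demo_feat], dim=1)
        return self.fusion(fused)                      # (B, 1) logits
```

Consistent with the joint training described in the abstract, both paths would be optimized end-to-end, with the output logit trained against the hypertension label using a binary cross-entropy loss.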
