bims-aukdir Biomed News
on Automated knowledge discovery in diabetes research
Issue of 2025-07-06
eleven papers selected by
Mott Given



  1. Transl Vis Sci Technol. 2025 Jul 01. 14(7): 1
       Purpose: To investigate the fairness of existing deep models for diabetic retinopathy (DR) detection and introduce an equitable model to reduce group performance disparities.
    Methods: We evaluated the performance and fairness of various deep learning models for DR detection using fundus images and optical coherence tomography (OCT) B-scans. A Fair Adaptive Scaling (FAS) module was developed to reduce group disparities. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), and equity across various groups was assessed by equity-scaled AUC, which accommodated both overall AUC and AUCs of individual groups.
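The equity-scaled AUC above is described only qualitatively. As a minimal sketch, one published formulation (an assumption here, not necessarily the authors' exact definition) penalizes the overall AUC by the summed absolute gaps between each group's AUC and the overall AUC:

```python
# Hedged sketch: equity-scaled AUC, assuming the formulation
#   ES-AUC = AUC_overall / (1 + sum_g |AUC_g - AUC_overall|)
import numpy as np

def roc_auc(y_true, y_score):
    """ROC AUC via the rank (Mann-Whitney) formulation; assumes no tied scores."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def equity_scaled_auc(y_true, y_score, groups):
    """Overall AUC shrunk by per-group AUC deviations (assumed formula)."""
    y_true, y_score, groups = map(np.asarray, (y_true, y_score, groups))
    overall = roc_auc(y_true, y_score)
    gap = sum(abs(roc_auc(y_true[groups == g], y_score[groups == g]) - overall)
              for g in np.unique(groups))
    return overall / (1 + gap)
```

When every group is classified equally well, the penalty term vanishes and the equity-scaled AUC equals the overall AUC; any group disparity pulls it below the overall AUC.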
    Results: Using color fundus images, the integration of FAS with EfficientNet improved the overall AUC and equity-scaled AUC from 0.88 and 0.83 to 0.90 and 0.84 (P < 0.05) by race. AUCs for Asians and Whites increased by 0.05 and 0.03, respectively (P < 0.01). For gender, both metrics improved by 0.01 (P < 0.05). Using DenseNet121 on OCT B-scans by race, FAS improved the overall AUC and equity-scaled AUC from 0.875 and 0.81 to 0.884 and 0.82, with gains of 0.03 and 0.02 for Asians and Blacks (P < 0.01). For gender, DenseNet121's metrics rose by 0.04 and 0.03, with gains of 0.05 and 0.04 for females and males (P < 0.01).
    Conclusions: Deep learning models demonstrate varying accuracies across different groups in DR detection. FAS improves equity and accuracy of deep learning models.
    Translational Relevance: The proposed deep learning model has the potential to improve both the performance and equity of DR detection.
    DOI:  https://doi.org/10.1167/tvst.14.7.1
  2. BMC Ophthalmol. 2025 Jul 01. 25(1): 352
       BACKGROUND: Diabetic macular edema (DME) is a leading cause of vision loss in diabetes, with variable responses to anti-vascular endothelial growth factor (anti-VEGF) therapy in DME patients. Current diagnosis relies on optical coherence tomography (OCT) imaging, but manual interpretation is limited. This study aims to integrate 3D-OCT features and clinical variables to develop machine learning (ML) models for predicting anti-VEGF treatment outcomes.
    METHODS AND ANALYSIS: Medical records and 3D-OCT images of DME patients were included in this study. The 3D-OCT images were categorized into good and poor visual response groups based on the best corrected visual acuity at one month after three consecutive anti-VEGF treatments. The images and clinical features were subjected to assessment by 11 automatic classification models for anti-VEGF treatment responses in DME patients. The top 3 performing models were selected to build an ensemble model, and evaluated in the test dataset.
    RESULTS: This study included 142 patients with 3D-OCT images of 170 eyes. A total of 20 image and clinical features were selected for model construction and testing of anti-VEGF treatment response in DME patients. Adaptive boosting (AdaBoost), GradientBoosting, and light gradient boosting machine (LightGBM) performed better than the remaining 8 models. The ensemble model achieved a sensitivity of 0.941, specificity of 0.882, and accuracy of 0.912 in the test dataset, with an area under the receiver operating characteristic curve of 0.976.
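The abstract does not state how the top-3 models are combined. A common choice (assumed here, not confirmed by the paper) is soft voting: averaging the positive-class probabilities of the base models before thresholding. The probability arrays below simply stand in for AdaBoost, GradientBoosting, and LightGBM outputs:

```python
# Hedged sketch of a soft-voting ensemble plus the reported metrics
# (sensitivity, specificity, accuracy). The averaging rule is an
# assumption; the paper may weight or stack its base models differently.
import numpy as np

def soft_vote(prob_lists):
    """Average predicted positive-class probabilities across base models."""
    return np.mean(np.asarray(prob_lists), axis=0)

def binary_metrics(y_true, y_prob, threshold=0.5):
    """Sensitivity, specificity, and accuracy at a fixed decision threshold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }
```

Averaging probabilities (rather than majority-voting hard labels) preserves each model's confidence, which usually helps when the base learners are well calibrated.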
    CONCLUSION: This study established an ensemble ML algorithm based on 3D-OCT images and clinical features that automatically predicts response to anti-VEGF treatment in DME patients and can assist clinicians in making optimal treatment decisions.
    Keywords:  Anti-VEGF treatment; Diabetic macular edema; Machine learning; Optical coherence tomography.
    DOI:  https://doi.org/10.1186/s12886-025-04181-x
  3. Front Med (Lausanne). 2025;12: 1591832
       Background: Diabetic retinopathy (DR) screening faces critical challenges in early detection due to its asymptomatic onset and the limitations of conventional prediction models. While existing studies predominantly focus on image-based AI diagnosis, there is a pressing need for accurate risk prediction using structured clinical data. The purpose of this study was to develop, compare, and validate models for predicting retinopathy in diabetic patients using five traditional machine learning models and a deep learning model.
    Methods: On the basis of 3,000 data points from the Diabetes Complications Data Set of the National Center for Population Health Sciences Data, the differences in the characteristics of patients with diabetes mellitus and diabetes combined with retinopathy were statistically analyzed using SPSS software. Five traditional machine learning models and a model based on deep neural networks (DNNs) were used to train models to assess retinopathy in diabetic patients.
    Results: The deep learning-based prediction model outperformed the traditional machine learning models, namely logistic regression, decision tree, naive Bayes, random forest, and support vector machine, on all datasets in predicting retinopathy in diabetic patients (accuracy, 0.778 vs. 0.753, 0.630, 0.718, 0.758, 0.776, respectively; F1 score, 0.776 vs. 0.751, 0.602, 0.724, 0.755, 0.776, respectively; AUC, 0.833 vs. 0.822, 0.631, 0.769, 0.829, 0.831, respectively). To enhance the interpretability of the deep learning model, SHAP analysis was employed to assess feature importance and provide insights into the key drivers of retinopathy prediction.
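Exact SHAP values as used above require the `shap` package and access to the trained network. As a simpler stand-in that conveys the same idea (attributing predictive power to individual features), the sketch below uses permutation importance: the accuracy drop when one feature column is shuffled. The `ThresholdModel`-style object is a placeholder for any model with a `.predict(X)` method.

```python
# Hedged sketch: permutation feature importance as a proxy for the SHAP
# analysis described above (not the authors' method; SHAP attributes
# per-prediction contributions, this measures a global accuracy drop).
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled in turn."""
    rng = np.random.default_rng(seed)
    base = np.mean(model.predict(X) == y)  # accuracy on intact data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy this feature's association with y
            drops.append(base - np.mean(model.predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances
```

A feature whose shuffling barely changes accuracy is (by this measure) unimportant; large drops flag the key drivers, analogous to large mean absolute SHAP values.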
    Conclusion: Deep learning models can accurately predict retinopathy in diabetic patients. The findings of this study can be used for prevention and monitoring by allocating resources to high-risk patients.
    Keywords:  deep learning model; diabetic retinopathy; machine learning; model comparison; prediction models
    DOI:  https://doi.org/10.3389/fmed.2025.1591832
  4. Front Digit Health. 2025;7: 1547045
       Background: The worst outcomes of diabetic retinopathy (DR) can be prevented by implementing DR screening programs assisted by AI. At the University Hospital of Navarre (HUN), Spain, general practitioners (GPs) grade fundus images in an ongoing DR screening program, referring target patients to a second screening level (ophthalmologist).
    Methods: After collecting their requirements, HUN decided to develop a custom AI tool, called NaIA-RD, to assist their GPs in DR screening. This paper introduces NaIA-RD, details its implementation, and highlights its unique combination of DR and retinal image quality grading in a single system. Its impact is measured in an unprecedented before-and-after study that compares 19,828 patients screened before NaIA-RD's implementation and 22,962 patients screened after.
    Results: NaIA-RD influenced the screening criteria of 3 of 4 GPs, increasing their sensitivity. Agreement between NaIA-RD and the GPs was high for non-referral proposals (94.6% or more), but lower and variable (from 23.4% to 86.6%) for referral proposals. An ophthalmologist ruled out NaIA-RD error in most of the contradicted referral proposals, labeling 93% of a sample of them as referable. In an autonomous setup, NaIA-RD would have reduced the study visualization workload by a factor of 4.27 without missing a single case of sight-threatening DR referred by a GP.
    Conclusion: DR screening was more effective when supported by NaIA-RD, which could be safely used to autonomously perform the first level of screening. This shows how AI devices, when seamlessly integrated into clinical workflows, can help improve clinical pathways in the long term.
    Keywords:  AI medical device; before-and-after study; decision-support system; deep learning; diabetic retinopathy
    DOI:  https://doi.org/10.3389/fdgth.2025.1547045
  5. SLAS Technol. 2025 Jun 28. pii: S2472-6303(25)00083-4. [Epub ahead of print] 33: 100325
      Diabetic retinopathy (DR) remains a key contributor to visual impairment worldwide, requiring the development of efficient and accurate deep learning models for automated diagnosis. This study presents FastEffNet, a novel framework that leverages transformer-based knowledge distillation (KD) to enhance DR severity classification while reducing computational complexity. The proposed approach employs FastViT-MA26 as the teacher model and EfficientNet-B0 as the student model, striking a practical balance between accuracy and computational efficiency. The APTOS blindness detection dataset, comprising 3,662 images across five severity classes, is collected, pre-processed, normalized, split, and augmented to address class imbalance. The teacher model undergoes training and validation before transferring its knowledge to the student model, enabling the latter to approximate the teacher's performance while maintaining a lightweight architecture. To comprehensively assess the efficacy of the proposed framework, additional student models, including HGNet, ResNet50, MobileNetV3, and DeiT, are analysed for comparative assessment. Model interpretability is enhanced through Grad-CAM++ visualizations, which highlight critical retinal regions influencing DR severity classification. Several measures are used to evaluate performance, including accuracy, precision, recall, F1-score, Cohen's Kappa Score (CKS), Weighted Kappa Score (WKS), and Matthews Correlation Coefficient (MCC), ensuring a robust assessment. Among all student models, EfficientNet-B0 achieves the highest classification accuracy of 95.39%, precision of 95.43%, recall of 95.39%, F1-score of 95.37%, CKS of 0.94, WKS of 0.97, MCC of 0.94, AUC of 0.99, and a KD loss of 0.17, with a computational cost of 0.38 G FLOPs. These results demonstrate its effectiveness as an optimized lightweight model for DR detection. The findings emphasize the potential of KD-based lightweight models in attaining high diagnostic accuracy while reducing computational complexity, paving the way for scalable and cost-effective DR screening solutions.
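The abstract reports a KD loss but does not spell out its form. The classic Hinton-style distillation objective is a reasonable guess (an assumption, not the paper's stated loss): a weighted sum of hard-label cross-entropy and a temperature-softened KL divergence between teacher and student logits. The temperature `T` and weight `alpha` below are illustrative defaults:

```python
# Hedged sketch of a standard knowledge-distillation loss (assumed form):
#   L = alpha * CE(student, labels) + (1 - alpha) * T^2 * KL(teacher_T || student_T)
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled, numerically stable softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=3.0, alpha=0.5):
    """Blend of hard-label cross-entropy and softened teacher-student KL."""
    labels = np.asarray(labels)
    p_s = softmax(student_logits)                       # T = 1 for the CE term
    ce = -np.mean(np.log(p_s[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, T)                    # softened distributions
    p_s_T = softmax(student_logits, T)
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s_T + 1e-12)),
                        axis=-1))
    return alpha * ce + (1 - alpha) * T * T * kl
```

The T² factor keeps the gradient magnitude of the softened term comparable across temperatures; when student and teacher logits agree, the KL term vanishes and only the hard-label loss remains.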
    Keywords:  Deep learning; Diabetic retinopathy; EfficientNet-B0; FastViT; Grad-CAM++; Knowledge distillation; Transformer-based models
    DOI:  https://doi.org/10.1016/j.slast.2025.100325
  6. Diabetes Care. 2025 Jul 01. pii: dc250355. [Epub ahead of print]
       OBJECTIVE: To develop a multimodal model to predict chronic kidney disease (CKD) in patients with type 2 diabetes mellitus (T2DM), given the limited research on this integrative approach.
    RESEARCH DESIGN AND METHODS: We obtained multimodal data sets from Kyung Hee University Medical Center (n = 7,028; discovery cohort) for training and internal validation and UK Biobank (n = 1,544; validation cohort) for external validation. CKD was defined based on ICD-9 and ICD-10 codes and/or estimated glomerular filtration rate (eGFR) ≤60 mL/min/1.73 m2. We ensembled various deep learning models and interpreted their predictions using explainable artificial intelligence (AI) methods, including Shapley additive explanations (SHAP) and gradient-weighted class activation mapping (Grad-CAM). Subsequently, we investigated the potential association between the model probability and vascular complications.
    RESULTS: The multimodal model, which ensembles a Visual Geometry Group 16 (VGG16) network and a deep neural network, showed high performance in predicting CKD, with area under the receiver operating characteristic curve of 0.880 (95% CI, 0.806-0.954) in the discovery cohort and 0.722 in the validation cohort. SHAP and Grad-CAM highlighted key predictors, including eGFR and optic disc, respectively. The model probability was associated with an increased risk of macrovascular complications (tertile 1 [T1]: adjusted hazard ratio, 1.42 [95% CI, 1.06-1.90]; T2: 1.59 [1.17-2.16]; T3: 1.64 [1.20-2.26]) and microvascular complications (T3: 1.30 [1.02-1.67]).
    CONCLUSIONS: Our multimodal AI model integrates fundus images and clinical data from binational cohorts to predict the risk of new-onset CKD within 5 years and associated vascular complications in patients with T2DM.
    DOI:  https://doi.org/10.2337/dc25-0355
  7. JMIR Diabetes. 2025 Jul 04. 10 e72874
       Background: Effective diabetes management requires precise glycemic control to prevent both hypoglycemia and hyperglycemia, yet existing machine learning (ML) and reinforcement learning (RL) approaches often fail to balance competing objectives. Traditional RL-based glucose regulation systems primarily focus on single-objective optimization, overlooking factors such as minimizing insulin overuse, reducing glycemic variability, and ensuring patient safety. Furthermore, these approaches typically rely on centralized data processing, which raises privacy concerns due to the sensitive nature of health care data. There is a critical need for a decentralized, privacy-preserving framework that can personalize blood glucose regulation while addressing the multiobjective nature of diabetes management.
    Objective: This study aimed to develop and validate PRIMO-FRL (Privacy-Preserving Reinforcement Learning for Individualized Multi-Objective Glycemic Management Using Federated Reinforcement Learning), a novel framework that optimizes clinical objectives (maximizing time in range (TIR), reducing hypoglycemia and hyperglycemia, and minimizing glycemic risk) while preserving patient privacy.
    Methods: We developed PRIMO-FRL, integrating multiobjective reward shaping to dynamically balance glucose stability, insulin efficiency, and risk reduction. The model was trained and tested using data from 30 simulated patients (10 children, 10 adolescents, and 10 adults) generated with the Food and Drug Administration (FDA)-approved UVA/Padova simulator. A comparative analysis was conducted against state-of-the-art RL and ML models, evaluating performance using metrics such as TIR, hypoglycemia (<70 mg/dL), hyperglycemia (>180 mg/dL), and glycemic risk scores.
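The evaluation metrics above are standard consensus CGM metrics and are straightforward to compute from a glucose trace. The sketch below assumes equally spaced readings (real evaluation would weight by the sampling interval) and uses the thresholds quoted in the abstract:

```python
# Hedged sketch of the glycemic metrics used above: percent time in range
# (70-180 mg/dL), below range (<70 mg/dL), and above range (>180 mg/dL),
# assuming equally spaced CGM samples.
import numpy as np

def glycemic_metrics(glucose_mgdl):
    """Percent of readings in, below, and above the 70-180 mg/dL range."""
    g = np.asarray(glucose_mgdl, dtype=float)
    return {
        "TIR_pct": 100.0 * np.mean((g >= 70) & (g <= 180)),
        "below_70_pct": 100.0 * np.mean(g < 70),
        "above_180_pct": 100.0 * np.mean(g > 180),
    }
```

With 5-minute CGM sampling, each reading represents the same amount of time, so the fraction of readings in range equals the fraction of time in range.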
    Results: The PRIMO-FRL model achieved a robust overall TIR of 76.54%, with adults demonstrating the highest TIR at 81.48%, followed by children at 77.78% and adolescents at 70.37%. Importantly, the approach eliminated hypoglycemia, with 0.0% of time spent below 70 mg/dL across all cohorts, significantly outperforming existing methods. Mild hyperglycemia (180-250 mg/dL) was observed in adolescents (29.63%), children (22.22%), and adults (18.52%), with adults exhibiting the best control. Furthermore, the PRIMO-FRL approach consistently reduced glycemic risk scores, demonstrating improved safety and long-term stability in glucose regulation.
    Conclusions: Our findings highlight the potential of PRIMO-FRL as a transformative, privacy-preserving approach to personalized glycemic management. By integrating federated RL, this framework eliminates hypoglycemia, improves TIR, and preserves data privacy by decentralizing model training. Unlike traditional centralized approaches that require sharing sensitive health data, PRIMO-FRL leverages federated learning to keep patient data local, significantly reducing privacy risks while enabling adaptive and personalized glucose control. This multiobjective optimization strategy offers a scalable, secure, and clinically viable solution for real-world diabetes care. The ability to train personalized models across diverse populations without exposing raw data makes PRIMO-FRL well-suited for deployment in privacy-sensitive health care environments. These results pave the way for future clinical adoption, demonstrating the potential of privacy-preserving artificial intelligence in optimizing glycemic regulation while maintaining security, adaptability, and personalization.
    Keywords:  AI; blood glucose control; diabetes management; federated learning; federated reinforcement learning; multiobjective optimization; privacy-preserving artificial intelligence; reinforcement learning; reward shaping
    DOI:  https://doi.org/10.2196/72874
  8. Sci Rep. 2025 Jul 01. 15(1): 20962
      Diabetic foot ulceration (DFU) is a severe complication of diabetic foot syndrome, often leading to amputation. In patients with neuropathy, ulcer formation is facilitated by elevated plantar tissue stress under insensate feet. This study presents a plantar pressure distribution analysis method to predict diabetic peripheral neuropathy. The Win-Track platform was used to gather clinical and plantar pressure data from 86 diabetic patients with different degrees of neuropathy. An automated image processing algorithm segmented plantar pressure images into forefoot and hindfoot regions for precise pressure distribution measurement. Comparative analysis of static and dynamic assessment showed that static analysis consistently outperformed dynamic methods. Gradient Boosting achieved the highest accuracy (88% dynamic, 100% static), with Random Forest and Decision Tree also performing well. Explainable AI techniques (SHAP, Eli5, Anchor Explanations) provided insights into feature importance, enhancing model interpretability. Additionally, a foot classification system based on the forefoot-hindfoot pressure ratio categorized feet as flat, regular, or arched. These findings support the development of improved diagnostic tools for early neuropathy detection, aiding risk stratification and prevention strategies. Enhanced screening can help reduce DFU incidence, lower amputation rates, and ultimately decrease diabetes-related mortality.
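The ratio-based foot classification above gives the categories but not the cutoffs. A minimal sketch of the rule follows; the two thresholds are illustrative placeholders, not the study's values:

```python
# Hedged sketch of a forefoot-hindfoot pressure-ratio classifier.
# The cutoffs (0.8 and 1.2) are hypothetical, chosen only to illustrate
# the three-way flat / regular / arched split described in the abstract.
def classify_foot(forefoot_pressure, hindfoot_pressure,
                  flat_cutoff=0.8, arched_cutoff=1.2):
    """Categorize a foot from its forefoot/hindfoot mean pressure ratio."""
    ratio = forefoot_pressure / hindfoot_pressure
    if ratio < flat_cutoff:
        return "flat"
    if ratio > arched_cutoff:
        return "arched"
    return "regular"
```

In practice the cutoffs would be calibrated against the segmented pressure images and clinical labels rather than fixed a priori.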
    Keywords:  Diabetic peripheral neuropathy; Explainable AI; Image segmentation; Machine learning; Plantar pressure analysis
    DOI:  https://doi.org/10.1038/s41598-025-07774-0
  9. BMC Cardiovasc Disord. 2025 Jul 03. 25(1): 448
       BACKGROUND: Latent autoimmune diabetes in adults (LADA) is a slowly progressing form of diabetes with autoimmune origins. Patients with LADA are at an elevated risk of developing cardiovascular diseases, including carotid atherosclerosis. While machine learning models have been widely used in predicting cardiovascular risks in Type 1 and Type 2 diabetes, research on LADA remains limited. Early prediction of carotid atherosclerosis using machine learning models could help in timely intervention and improved patient outcomes for this specific population.
    METHODS: We conducted a retrospective cross-sectional analysis involving 142 LADA patients diagnosed within the endocrinology department at Shanxi Bethune Hospital, China. Various clinical, demographic, and laboratory variables were analyzed using univariate and multivariate logistic regression, complemented by LASSO regression for feature selection. Additionally, eight machine learning algorithms were employed to predict carotid atherosclerosis: logistic regression (LR), decision tree (DT), random forests (RF), k-nearest neighbors (KNN), support vector machine (SVM), neural networks (NNET), eXtreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM).
    RESULTS: Significant risk factors for carotid atherosclerosis were identified, including age, smoking history, BMI, ALB, HDL-C, and ALT. Among the various machine learning models evaluated, the LR model exhibited the highest performance, achieving an area under the curve (AUC) of 0.936, alongside an accuracy of 86%. NNET and SVM models also demonstrated robust predictive capacities with AUC values of 0.919 and 0.918, respectively.
    CONCLUSIONS: This study highlights the critical role of identifying risk factors for carotid atherosclerosis in LADA patients. Our use of ML models builds on the growing body of work in diabetes-related cardiovascular risk prediction, and it offers a novel approach by specifically targeting the LADA population. Incorporating ML models into clinical practice can improve risk stratification and patient management in LADA. Future research should validate these models across diverse populations and investigate the underlying mechanisms linking LADA to cardiovascular risk.
    CLINICAL TRIAL NUMBER: Not applicable.
    Keywords:  Carotid atherosclerosis; Latent autoimmune diabetes in adults; Machine learning
    DOI:  https://doi.org/10.1186/s12872-025-04786-6
  10. Front Digit Health. 2025;7: 1534830
       Introduction: Diabetes mellitus (DM) is a chronic condition defined by increased blood glucose that affects more than 500 million adults. Type 1 diabetes (T1D) needs to be treated with insulin. Keeping glucose within the desired range is challenging. Despite advances in the mHealth field, the appearance of do-it-yourself (DIY) tools, and progress in glucose level prediction based on deep learning (DL), these tools fail to engage users in the long term. This limits the benefit they could bring to daily T1D self-management, specifically through accurate short-term glucose prediction.
    Methods: This work proposed a DL-based DIY framework for interstitial glucose prediction using continuous glucose monitoring (CGM) data to generate one personalized DL model per user, without using data from other people. The DIY module reads the raw CGM data (as they would be uploaded by potential users of this tool) and automatically prepares them to train and validate a DL model that performs glucose predictions up to one hour ahead. For training and validation, 1 year of CGM data collected from 29 subjects with T1D were used.
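The data-preparation step above (turning a raw CGM series into training examples) typically means slicing the series into (history window, future target) pairs. The sketch below assumes 5-minute CGM sampling, so a 60-minute prediction horizon is 12 samples ahead; the window lengths are illustrative, not the paper's settings:

```python
# Hedged sketch of CGM windowing for supervised glucose prediction.
# history_len = 24 samples (2 h of history) and horizon = 12 samples
# (1 h ahead at 5-min sampling) are assumed values for illustration.
import numpy as np

def make_windows(cgm, history_len=24, horizon=12):
    """Slice a CGM series into inputs X (n, history_len) and targets y (n,)."""
    cgm = np.asarray(cgm, dtype=float)
    X, y = [], []
    for start in range(len(cgm) - history_len - horizon + 1):
        X.append(cgm[start:start + history_len])          # past glucose window
        y.append(cgm[start + history_len + horizon - 1])  # value `horizon` steps ahead
    return np.array(X), np.array(y)
```

Because each user's model is trained only on that user's own windows, this step is all that is needed to produce a fully personalized training set from a single uploaded CGM file.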
    Results and Discussion: Results showed prediction performance comparable to the state of the art, using only CGM data. To the best of our knowledge, this is the first work to provide a DL-based DIY approach for fully personalized glucose prediction. Moreover, the framework is open source and has been deployed in Docker, enabling standalone use, integration into a smartphone application, or experimentation with novel DL architectures.
    Keywords:  continuous glucose monitoring; deep learning; mHealth; personalized medicine; type 1 diabetes
    DOI:  https://doi.org/10.3389/fdgth.2025.1534830