bims-aukdir Biomed News
on Automated knowledge discovery in diabetes research
Issue of 2025-06-15
eight papers selected by
Mott Given



  1. Clin Exp Ophthalmol. 2025 Jun 09.
       BACKGROUND: Artificial intelligence (AI) enhanced retinal screening could reduce the impact of diabetic retinopathy (DR), the leading cause of preventable blindness in Australia. This study assessed the performance and validity of a dual-modality, deep learning system for detection of vision-threatening diabetic retinopathy (vtDR) in a multi-ethnic community.
    METHODS: Cross-sectional (algorithm-validation) study with the deep learning system assessing fundus photographs for gradability and severity of DR, and optical coherence tomography (OCT) scans for diabetic macular oedema (DMO). Internal validation of each algorithm was performed using a computer-randomised 80:20 split. External validation was performed against standard grading provided by two ophthalmologists in 748 prospectively recruited persons with diabetes (aged ≥ 10 years) from hospital diabetes clinics and a general practice. Main outcome measures included sensitivity, specificity and the area under the receiver operating characteristic curve (AUC).
    RESULTS: Internal validation revealed robust test characteristics. When compared to ophthalmologists, the system achieved an AUC of 0.92 (95% CI 0.90-0.94) for fundus photograph gradability, 0.91 (95% CI 0.85-0.94) for the diagnosis of severe non-proliferative DR/proliferative DR and 0.90 (95% CI 0.87-0.96) for DMO detection from OCT scans. It demonstrated real-world applicability with an AUC of 0.94 (95% CI 0.91-0.97), sensitivity of 92.7% and specificity of 95.5% for detection of vtDR. Ungradable images occurred in 55 participants (7.4%).
    CONCLUSIONS: The dual-modality, deep learning system can diagnose vtDR from fundus photographs and OCT scans with high levels of accuracy and specificity. This could support a novel model of care for DR screening in our community.
    Keywords:  artificial intelligence; deep learning; diabetic retinopathy; dual-modality; screening
    DOI:  https://doi.org/10.1111/ceo.14560
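The metrics reported above (AUC, sensitivity, specificity) can be computed for any binary classifier. A minimal pure-Python sketch, using synthetic placeholder labels and scores rather than any study data:

```python
# Illustrative sketch of the metrics in the abstract: AUC, sensitivity,
# and specificity for a binary vtDR classifier. All data below are
# synthetic placeholders, not study data.

def auc(y_true, y_score):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(y_true, y_score, threshold=0.5):
    """Sensitivity and specificity at a fixed operating threshold."""
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true  = [0, 0, 0, 0, 1, 1, 1, 1]        # 1 = vision-threatening DR
y_score = [0.1, 0.3, 0.2, 0.6, 0.7, 0.8, 0.9, 0.4]
print(auc(y_true, y_score))                # 0.9375
print(sens_spec(y_true, y_score))          # (0.75, 0.75)
```

Note that AUC is threshold-free, while the paired sensitivity/specificity figures in the abstract correspond to one chosen operating point.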
  2. IEEE J Biomed Health Inform. 2025 Jun 09. PP
      Diabetic Retinopathy (DR), a prevalent diabetes complication leading to blindness, often goes undetected until late stages because patients seek help only once symptoms manifest and expert availability is limited. To address these challenges, we present a novel temporal integrative machine learning system that harnesses both fundus images and electronic health records (EHR) for early and enhanced DR detection. Our system uniquely processes EHR data by focusing on temporal trends and long-term patient histories, creating thousands of temporal features that capture their evolving dynamics over time. This dual-model system includes a temporal tabular model that relies solely on historical medical records and a deep learning multi-modal model that combines these records with fundus images. The models were trained and tested using real clinical data from 5,000 patients at Soroka Hospital in Israel, comprising 25,000 retinal images collected over 8 years and electronic health records spanning up to 20 years. Given the primarily unlabeled nature of the data, the training phase employed a pseudo-labeling technique. The models were evaluated and verified by a retina specialist, surpassing existing models with AUROC scores of 0.881 for the temporal-trend EHR model and 0.988 for the multi-modal imaging + EHR model. The integration of historical temporal medical data with imaging offers a more dynamic and comprehensive machine-learning system, enhancing DR detection and offering new insights into associated risk factors. This system not only aids physicians in obtaining a holistic view of a patient's health over time but also facilitates fast identification of individuals at high risk for DR.
    DOI:  https://doi.org/10.1109/JBHI.2025.3578197
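The pseudo-labeling technique the abstract mentions follows a standard pattern: fit a model on the small labeled set, assign labels to confident unlabeled points, then refit on the enlarged set. A toy sketch of that loop, where the "model" is just a 1-D decision stump and all data are hypothetical placeholders (the paper's actual models and thresholds are not described in the abstract):

```python
# Toy sketch of pseudo-labeling: a 1-D threshold "model" stands in for
# the paper's learners. Data, confidence margin, and model are all
# illustrative placeholders.

def fit_threshold(xs, ys):
    """Pick the threshold t (predict 1 if x >= t) with best training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == (y == 1) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

labeled_x, labeled_y = [0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1]
unlabeled_x = [0.15, 0.48, 0.52, 0.85]

# Step 1: fit on the labeled subset only.
t = fit_threshold(labeled_x, labeled_y)

# Step 2: pseudo-label unlabeled points, keeping only those far enough
# from the decision boundary (a crude confidence filter).
pseudo = [(x, int(x >= t)) for x in unlabeled_x if abs(x - t) > 0.2]

# Step 3: refit on labeled + confident pseudo-labeled data.
xs = labeled_x + [x for x, _ in pseudo]
ys = labeled_y + [y for _, y in pseudo]
t2 = fit_threshold(xs, ys)
print(t2)   # 0.8
```

Real systems iterate these steps and use calibrated model probabilities rather than a distance margin, but the enlarge-and-refit structure is the same.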
  3. Front Clin Diabetes Healthc. 2025;6:1547689
      
    Keywords:  artificial intelligence; challenges; deep learning; diabetes management; machine learning
    DOI:  https://doi.org/10.3389/fcdhc.2025.1547689
  4. AMIA Jt Summits Transl Sci Proc. 2025;2025:95-104
      Machine Learning (ML) algorithms are vital for supporting clinical decision-making in biomedical informatics. However, their predictive performance can vary across demographic groups, often due to the underrepresentation of historically marginalized populations in training datasets. This investigation reveals widespread sex- and age-related inequities in chronic disease datasets and their derived ML models. Thus, a novel analytical framework is introduced, combining systematic arbitrariness with traditional metrics like accuracy and data complexity. The analysis of data from over 25,000 individuals with chronic diseases revealed mild sex-related disparities, favoring predictive accuracy for males, and significant age-related differences, with better accuracy for younger patients. Notably, older patients showed inconsistent predictive accuracy across seven datasets, linked to higher data complexity and lower model performance. This highlights that representativeness in training data alone does not guarantee equitable outcomes, and model arbitrariness must be addressed before deploying models in clinical settings.
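The core of the subgroup audit described above is simply stratifying a performance metric by demographic group. A minimal sketch with hypothetical records (the study's datasets and fairness framework are not reproduced here):

```python
# Sketch of a subgroup accuracy audit: compare a model's accuracy across
# demographic groups. Records are toy placeholders, not study data.

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred). Returns accuracy per group."""
    totals, correct = {}, {}
    for group, y, p in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y == p)
    return {g: correct[g] / totals[g] for g in totals}

records = [
    ("young", 1, 1), ("young", 0, 0), ("young", 1, 1), ("young", 0, 0),
    ("older", 1, 0), ("older", 0, 0), ("older", 1, 1), ("older", 0, 1),
]
print(accuracy_by_group(records))   # {'young': 1.0, 'older': 0.5}
```

A gap like the one in this toy output is the kind of age-related disparity the abstract reports; the paper's point is that such gaps can persist even when groups are equally represented in the training data.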
  5. Front Endocrinol (Lausanne). 2025;16:1571362
      Prediabetes represents an early stage of glucose metabolism disorder with significant public health implications. Although traditional lifestyle interventions have demonstrated some efficacy in preventing the progression to type 2 diabetes, their limitations (such as lack of personalization, restricted real-time monitoring, and delayed intervention) are increasingly apparent. This article systematically explores the potential applications of continuous glucose monitoring (CGM) technology combined with artificial intelligence (AI) in the management of prediabetes. CGM provides real-time and dynamic glucose monitoring, addressing the shortcomings of conventional methods, while AI enhances the clinical utility of CGM data through deep learning and advanced data analysis. This review examines the advantages of integrating CGM and AI from three perspectives: precise diagnosis, personalized intervention, and decision support. Additionally, it highlights the unique roles of this integration in remote monitoring, shared decision-making, and patient empowerment. The article further discusses challenges related to data management, algorithm optimization, ethical considerations, and future directions for this technological integration. It proposes fostering multidisciplinary collaboration to promote the application of these innovations in diabetes management, aiming to deliver a more precise and efficient health management model for individuals with prediabetes.
    Keywords:  artificial intelligence; continuous glucose monitoring; fasting blood glucose; prediabetes; type 2 diabetes mellitus
    DOI:  https://doi.org/10.3389/fendo.2025.1571362
  6. Front Endocrinol (Lausanne). 2025;16:1579640
       Background: AI-assisted blood glucose management has become a promising method to enhance diabetes care, leveraging technologies like continuous glucose monitoring (CGM) and predictive models. A comprehensive bibliometric analysis is needed to understand the evolving trends in this research area.
    Methods: A bibliometric analysis was performed on 482 articles from the Web of Science Core Collection, focusing on AI in blood glucose management. Data were analyzed using CiteSpace and VOSviewer to explore research trends, influential authors, and global collaborations.
    Results: The study revealed a substantial increase in publications, particularly after 2016. Major research clusters included CGM, machine learning algorithms, and predictive modeling. The United States, Italy, and the UK were prominent contributors, with key journals such as Diabetes Technology & Therapeutics leading the field.
    Conclusion: AI technologies are significantly advancing blood glucose management, especially through machine learning and predictive models. Future research should focus on clinical integration and improving accessibility to enhance patient outcomes.
    Keywords:  AI; blood glucose management; continuous glucose monitoring; diabetes; machine learning
    DOI:  https://doi.org/10.3389/fendo.2025.1579640
  7. JMIR Med Inform. 2025 Jun 13;13:e72238
     Background: Insulin resistance (IR), a precursor to type 2 diabetes and a major risk factor for various chronic diseases, is becoming increasingly prevalent in China due to population aging and unhealthy lifestyles. Current methods, such as the gold-standard hyperinsulinemic-euglycemic clamp, have limitations in practical application. Developing more convenient and efficient methods to predict and manage IR in nondiabetic populations would therefore have value for prevention and control.
    Objective: This study aimed to develop and validate a machine learning prediction model for IR in a nondiabetic population, using low-cost diagnostic indicators and questionnaire surveys.
    Methods: A cross-sectional study was conducted for model development, and a retrospective cohort study was used for validation. Data from 17,287 adults with normal fasting blood glucose who underwent physical exams and completed surveys at the Health Management Center of Xiangya Third Hospital, Central South University, from January 2018 to August 2022, were analyzed. IR was assessed using the Homeostasis Model Assessment (HOMA-IR) method. The dataset was split into 80% (13,128/16,411) training and 20% (3283/16,411) testing. A total of 5 machine learning algorithms, namely random forest, Light Gradient Boosting Machine (LightGBM), Extreme Gradient Boosting, Gradient Boosting Machine, and CatBoost, were used. Model optimization included resampling, feature selection, and hyperparameter tuning. Performance was evaluated using F1-score, accuracy, sensitivity, specificity, area under the curve (AUC), and Kappa value. Shapley Additive Explanations analysis was used to assess feature importance. For clinical implication investigation, a different retrospective cohort of 20,369 nondiabetic participants (from the Xiangya Third Hospital database between January 2017 and January 2019) was used for time-to-event analysis with Kaplan-Meier survival curves.
    Results: Data from 16,411 nondiabetic individuals were analyzed. We randomly selected 13,128 participants for the training group and 3283 participants for the validation group. The final model included 34 lifestyle-related questionnaire features and 17 biochemical markers. In the validation group, all AUCs were greater than 0.90; in the test group, all AUCs were greater than 0.80. The LightGBM model showed the best IR prediction performance with an accuracy of 0.7542, sensitivity of 0.6639, specificity of 0.7642, F1-score of 0.6748, Kappa value of 0.3741, and AUC of 0.8456. The top 10 features included BMI, fasting blood glucose, high-density lipoprotein cholesterol, triglycerides, creatinine, alanine aminotransferase, sex, total bilirubin, age, and albumin/globulin ratio. In the validation cohort, all participants were separated into the high-risk IR group and the low-risk IR group according to the LightGBM algorithm. Out of 5101 high-risk IR participants, 235 (4.6%) developed diabetes, while 137 (0.9%) of 15,268 low-risk IR participants did. This resulted in a hazard ratio of 5.1, indicating a significantly higher risk for the high-risk IR group.
    Conclusions: By leveraging low-cost laboratory indicators and questionnaire data, the LightGBM model effectively predicts IR status in nondiabetic individuals, aiding in large-scale IR screening and diabetes prevention, and it may potentially become an efficient and practical tool for insulin sensitivity assessment in these settings.
    Keywords:  Light Gradient Boosting Machine; diabetes; diabetes prevention; insulin resistance; machine learning
    DOI:  https://doi.org/10.2196/72238
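Two of the abstract's quantities can be checked directly. HOMA-IR, the index used to define IR here, is conventionally fasting insulin (µU/mL) times fasting glucose (mmol/L) divided by 22.5. And the crude risk ratio implied by the reported diabetes counts comes out near the reported hazard ratio of 5.1 (the hazard ratio itself comes from a time-to-event model, which counts alone cannot reproduce; the HOMA-IR inputs below are hypothetical example values):

```python
# Quick checks on the abstract's numbers. The homa_ir inputs are
# hypothetical example values, not study data.

def homa_ir(fasting_insulin_uU_mL, fasting_glucose_mmol_L):
    """Conventional HOMA-IR: insulin (uU/mL) * glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_mL * fasting_glucose_mmol_L / 22.5

print(round(homa_ir(10.0, 5.0), 2))   # 2.22

# Crude incidence per risk group, from the abstract's counts.
high_risk = 235 / 5101        # ~4.6% of high-risk participants developed diabetes
low_risk = 137 / 15268        # ~0.9% of low-risk participants
print(round(high_risk / low_risk, 1))   # 5.1, consistent with the reported HR
```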
  8. Ophthalmol Sci. 2025 Sep-Oct;5(5):100804
       Purpose: To explore clinically significant diabetic retinal neurodegeneration in OCT images using explainable artificial intelligence (XAI) and subsequent evaluation by retinal specialists.
    Design: A single-center, retrospective, consecutive case series.
    Participants: Three hundred ninety-seven eyes from 397 diabetic retinopathy patients for XAI-based screening and 244 fellow eyes for subjective human evaluation.
    Methods: We acquired 30° horizontal OCT images centered on the fovea. An artificial intelligence (AI) model was developed to infer visual acuity (VA) reduction using fine-tuned RETFound-OCT. Attention maps highlighting regions contributing to VA inference were generated using layer-wise relevance propagation. Retinal specialists assessed OCT findings based on salient regions indicated by XAI. Two newly described findings, a needle-like appearance of the ganglion cell layer (GCL)/inner plexiform layer (IPL) ("ice-pick sign") and dot-like alterations in the outer nuclear layer (ONL) ("salt-and-pepper sign"), were evaluated alongside 2 established findings: EZ disruption and choroidal hypertransmission.
    Main Outcome Measures: Identification of clinically significant OCT findings associated with diabetic retinal neurodegeneration.
    Results: The AI model effectively discriminated eyes with poor vision (decimal VA ≤0.5) from those with good vision (VA ≥1.0) (area under the receiver operating characteristic curve of 0.947). Explainable artificial intelligence-based attention maps highlighted salient regions in the GCL/IPL (65.2% or 70.0%), ONL (52.2% or 28.3%), EZ (39.1% or 21.7%), and choroid (26.1% or 5.0%) in eyes with poor or good vision, respectively. Subjective evaluation by retinal specialists revealed the frequencies of these 4 findings as follows: ice-pick sign (32.4%), EZ disruption (25.0%), salt-and-pepper sign (16.0%), and choroidal hypertransmission (13.5%). Eyes with decimal VA ≤0.9 had these findings more frequently than those with VA ≥1.0 (P < 0.001 for all comparisons). Salt-and-pepper sign and choroidal hypertransmission exhibited high specificity for identifying eyes with poor vision. Statistical analyses demonstrated stronger associations among EZ disruption, the salt-and-pepper sign, and choroidal hypertransmission than between each of these findings and the ice-pick sign.
    Conclusions: Artificial intelligence-assisted exploration of OCT findings identified 2 established lesions and 2 novel OCT biomarkers indicative of clinically significant diabetic retinal neurodegeneration.
    Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
    Keywords:  Diabetic retinal neurodegeneration; Explainable artificial intelligence; Foundation model; OCT; Visual acuity
    DOI:  https://doi.org/10.1016/j.xops.2025.100804