bims-aukdir Biomed News
on Automated knowledge discovery in diabetes research
Issue of 2025-08-17
sixteen papers selected by
Mott Given



  1. Comput Biol Med. 2025 Aug 13. pii: S0010-4825(25)01214-4. [Epub ahead of print]196(Pt C): 110863
      Diabetic Retinopathy (DR) causes lesions of various types in the human retina. These lesions cause vision loss and, in extreme cases, blindness. Owing to the scarcity of resources and expert opinion, manual diagnosis of DR is unreliable for timely treatment and takes too long. In this paper, Detection and Classification of DR in Retinal Fundus Images using a Deep Spiking Q Network Optimized with the Partial Reinforcement Optimizer (DCDR-RFI-DSQN-PRO) is proposed. The input images are taken from the Eye PACS fundus image (EPFI) dataset and given to preprocessing, where a Regularized Bias-Aware Ensemble Kalman Filter (RBAEKF) is applied to enhance image quality and reduce noise. The preprocessed output is fed into feature extraction, where the Time-Frequency Synchroextracting Transform (TFSET) extracts grayscale statistic features (standard deviation, kurtosis, mean, skewness) and Haralick texture features (contrast, entropy, energy, homogeneity). The extracted features are supplied to a Deep Spiking Q Network (DSQN) for classifying diabetic retinopathy as No DR, Mild DR, Moderate DR, Severe DR, or PDR. DSQN does not by itself adopt any optimization strategy to define optimal parameters for classifying DR; hence, the Partial Reinforcement Optimizer (PRO) is used to tune the DSQN weight parameters, improving accuracy and reducing the error rate so that DR images are classified accurately. The proposed DCDR-RFI-DSQN-PRO approach is implemented in Python, and performance metrics such as precision, accuracy, recall, F1-score, specificity, ROC, error rate, and computational time are evaluated. The DCDR-RFI-DSQN-PRO achieves 20.58 %, 26.73 %, and 24.62 % better precision; 11.48 %, 17.73 %, and 15.32 % better specificity; 20.98 %, 26.66 %, and 16.32 % better F1-score; and 10.78 %, 20.47 %, and 12.86 % better ROC when compared with the existing models: detection of DR utilizing convolutional neural networks for feature extraction with classification (DDR-CNN-FEC), a lesion-dependent diabetic retinopathy detection through hybrid deep learning (LBAD-HDL), and automated diabetic retinopathy screening using deep learning (ADRS-DL), respectively.
    Keywords:  Deep spiking Q Networks; Eye PACS fundus image; Partial reinforcement optimizer; Regularized bias-aware ensemble Kalman filter; Time‐frequency synchroextracting transform
    DOI:  https://doi.org/10.1016/j.compbiomed.2025.110863
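The grayscale statistic features named in the abstract (mean, standard deviation, skewness, kurtosis) are standard moments of the pixel intensity distribution. The paper's TFSET pipeline is not shown; the sketch below is only an illustrative, assumption-laden computation of those four moments over a flat pixel list.

```python
import math

def grayscale_statistics(pixels):
    """Illustrative sketch (not the paper's code): the four grayscale
    statistic features -- mean, standard deviation, skewness, and
    (non-excess) kurtosis -- over a flat list of intensities."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    # Guard against a constant image (zero variance).
    if std == 0:
        return {"mean": mean, "std": 0.0, "skewness": 0.0, "kurtosis": 0.0}
    skew = sum(((p - mean) / std) ** 3 for p in pixels) / n
    kurt = sum(((p - mean) / std) ** 4 for p in pixels) / n
    return {"mean": mean, "std": std, "skewness": skew, "kurtosis": kurt}

feats = grayscale_statistics([10, 20, 20, 30])  # symmetric -> zero skewness
```

A symmetric intensity histogram yields zero skewness, which is why skewness complements the mean and standard deviation as a shape descriptor.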
  2. Sci Rep. 2025 Aug 10. 15(1): 29266
      Diabetic Retinopathy (DR) is a complication caused by diabetes that can destroy the retina, leading to blurred vision and even blindness. We propose a multi-attention residual refinement architecture that enhances conventional CNN performance through three strategic modifications: class-specific multi-attention for diagnostic feature weighting, space-to-depth preprocessing for improved spatial information preservation, and Squeeze-and-Excitation blocks for enhanced representational capacity. Our framework demonstrates universal applicability across different CNN architectures (ResNet, DenseNet, EfficientNet, MobileNet), consistently achieving 2-5% performance improvements on the EyePACS dataset while maintaining computational efficiency. The attention mechanism provides interpretable visualizations that align with clinical pathological patterns, validating the model's diagnostic reasoning.
    Keywords:  Attention mechanism; Deep learning model; Diabetic retinopathy
    DOI:  https://doi.org/10.1038/s41598-025-15269-1
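Space-to-depth preprocessing, mentioned above as a spatial-information-preserving step, trades resolution for channels instead of discarding pixels via strided downsampling. The paper's implementation is not given; this is a minimal pure-Python sketch of the rearrangement on nested lists.

```python
def space_to_depth(img, block=2):
    """Illustrative sketch: rearrange an H x W single-channel image
    (nested lists) into an (H/block) x (W/block) grid where each cell
    holds the block*block pixels of its patch as a channel vector --
    spatial detail is kept as channels rather than thrown away."""
    h, w = len(img), len(img[0])
    assert h % block == 0 and w % block == 0
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            # Flatten the block x block patch into a channel vector.
            patch = [img[i + di][j + dj] for di in range(block) for dj in range(block)]
            row.append(patch)
        out.append(row)
    return out

result = space_to_depth([[1, 2, 3, 4],
                         [5, 6, 7, 8],
                         [9, 10, 11, 12],
                         [13, 14, 15, 16]])
```

The 4x4 input becomes a 2x2 grid of 4-channel cells, so a subsequent stride-1 convolution still sees every original pixel.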
  3. Diagnostics (Basel). 2025 Aug 05. pii: 1966. [Epub ahead of print]15(15):
      Background/Objectives: Diabetic retinopathy is a leading cause of vision impairment worldwide, and the development of reliable automated classification systems is crucial for early diagnosis and clinical decision-making. This study presents a comprehensive comparative evaluation of two state-of-the-art deep learning families for the task of classifying diabetic retinopathy using retinal fundus images. Methods: The models were trained and tested in both binary and multi-class settings. The experimental design involved partitioning the data into training (70%), validation (20%), and testing (10%) sets. Model performance was assessed using standard metrics, including precision, sensitivity, specificity, F1-score, and the area under the receiver operating characteristic curve. Results: In binary classification, the ResNeXt101-64x4d model and RegNetY32GT model demonstrated outstanding performance, each achieving high sensitivity and precision. For multi-class classification, ResNeXt101-32x8d exhibited strong performance in early stages, while RegNetY16GT showed better balance across all stages, particularly in advanced diabetic retinopathy cases. To enhance transparency, SHapley Additive exPlanations were employed to visualize the pixel-level contributions for each model's predictions. Conclusions: The findings suggest that while ResNeXt models are effective in detecting early signs, RegNet models offer more consistent performance in distinguishing between multiple stages of diabetic retinopathy severity. This dual approach combining quantitative evaluation and model interpretability supports the development of more robust and clinically trustworthy decision support systems for diabetic retinopathy screening.
    Keywords:  RegNet; ResNeXt; SHAP; convolutional neural network; deep learning; diabetic retinopathy
    DOI:  https://doi.org/10.3390/diagnostics15151966
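The 70/20/10 train/validation/test partition described above can be sketched as a seeded shuffle-and-slice; the study's actual splitting code is not shown, so function and parameter names here are illustrative assumptions.

```python
import random

def split_dataset(samples, train=0.7, val=0.2, seed=42):
    """Illustrative sketch: partition samples into train/validation/test
    sets with the 70/20/10 proportions used in the study (the test set
    receives the remainder). A fixed seed makes the split reproducible."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

tr, va, te = split_dataset(range(100))
```

In practice a stratified split (preserving class proportions per set) is preferable for imbalanced retinopathy grades; the plain shuffle above only illustrates the proportions.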
  4. Health Sci Rep. 2025 Aug;8(8): e71167
       Background and Aims: The application of machine learning (ML) has started to change some important aspects of health care in diabetes. We aimed to utilize a bibliometric approach to analyze and map ML in the context of diabetes.
    Methods: To build our data set, we searched the Web of Science Core Collection (WoSCC) database, restricting the search to January 1, 2010 through December 31, 2023. For citation analysis, the online services of WoS were used to investigate the information content of the data set; VOSviewer and Microsoft Excel 2013 were employed to construct and visualize the bibliographic data.
    Results: Overall, 5,222 results that met the criteria were retrieved. The trend of published studies indicates that the number of publications has steadily increased over the past 14 years. The most active country was the USA, followed by China and India; the USA also showed the highest level of cooperation with other countries. The most prolific author on ML in the context of diabetes was Tien Yin Wong, with twenty-two articles, affiliated with Tsinghua University; followed by Pantelis Georgiou, with twenty articles, affiliated with Imperial College London; and Pau Herrero, with nineteen articles, affiliated with the Tijuana Institute of Technology. The most prolific research areas were machine learning, prediction models, diabetic retinopathy, deep learning, and diagnostics.
    Conclusion: The results of this study are a rich scientific source of ML for diabetes to guide researchers. This study can guide policymakers, physicians, and practitioners to help in the decision-making process. In addition, the findings will be useful for governments to guide future budgets for target studies.
    Keywords:  artificial intelligence; bibliometric analysis; diabetes; machine learning; prediction model
    DOI:  https://doi.org/10.1002/hsr2.71167
  5. Appl Opt. 2025 Apr 20. 64(12): 3180-3192
      Optical coherence tomography (OCT) is being investigated in diabetic retinopathy (DR) diagnostics as a real-time evaluation tool, and OCT images are currently the main basis for diagnosing patients with DR. Hyperreflective foci (HRF) are potential biomarkers for diagnosis and for predicting the progression and prognosis of DR, so developing artificial intelligence (AI) models that segment HRF is of great significance for clinical diagnosis and treatment. The purpose of this study is to construct a deep-learning algorithm that automatically segments HRF in OCT images, helping ophthalmologists make early diagnoses and evaluate prognosis in patients with DR. We propose a novel, to our knowledge, HRF segmentation algorithm based on Attention U-Net. We fuse the features of each layer and use the fused multi-scale information to guide generation of the attention map. We then embed a hybrid spatial-channel attention module at the decoder end of the network to capture the spatial and channel correlations of the feature map, making the network focus on the locations and channels related to the target region. Experimental results on 172 OCT images from 50 patients with DR demonstrate that our method is effective for HRF segmentation. In five-fold cross-validation, the dice similarity coefficient (DSC), sensitivity (SE), and precision (P) reach 63.79±0.94, 66.66±2.54, and 67.10±1.96, respectively. The overall segmentation performance of this model surpasses that of the four comparison networks, and HRF can be segmented more accurately and identified more easily. Balancing SE and P in a segmentation model is difficult; our improved Attention U-Net segments HRF effectively with high SE and P, outperforming the other algorithms. This model holds significant potential for the early detection, treatment evaluation, and prognosis assessment of patients with DR.
    DOI:  https://doi.org/10.1364/AO.547758
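The three evaluation metrics reported above (DSC, SE, P) are simple functions of the pixel-wise confusion counts. A minimal sketch, assuming flat binary masks rather than the paper's actual evaluation code:

```python
def segmentation_metrics(pred, truth):
    """Illustrative sketch: dice similarity coefficient, sensitivity,
    and precision (as percentages) for flat binary masks, where 1
    marks an HRF pixel."""
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    dsc = 100 * 2 * tp / (2 * tp + fp + fn)
    se = 100 * tp / (tp + fn)      # sensitivity (recall)
    prec = 100 * tp / (tp + fp)    # precision
    return dsc, se, prec

dsc, se, prec = segmentation_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

The trade-off the abstract mentions is visible in the formulas: raising the prediction threshold shrinks fp (raising precision) while growing fn (lowering sensitivity).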
  6. PLoS One. 2025;20(8): e0330381
       BACKGROUND: Coronary heart disease (CHD) and diabetes mellitus are highly prevalent in intensive care units (ICUs) and significantly contribute to high in-hospital mortality rates. Traditional risk stratification models often fail to capture the complex interactions among clinical variables, limiting their ability to accurately identify high-risk patients. Machine learning (ML) models, with their capacity to analyze large datasets and identify intricate patterns, provide a promising alternative for improving mortality prediction accuracy.
    OBJECTIVE: This study aims to develop and validate machine learning models for predicting in-hospital mortality in ICU patients with CHD and diabetes, and enhance model interpretability using SHapley Additive exPlanation (SHAP) values, thereby providing a more accurate and practical tool for clinicians.
    METHODS: We conducted a retrospective cohort study using data from the MIMIC-IV database, focusing on adult ICU patients with a primary diagnosis of CHD and diabetes. We extracted baseline characteristics, laboratory parameters, and clinical outcomes. The Boruta algorithm was employed for feature selection to identify variables significantly associated with in-hospital mortality, and 16 machine learning models, including logistic regression, random forest, gradient boosting, and neural networks, were developed and compared using receiver operating characteristic (ROC) curves and area under the curve (AUC) analysis. SHAP values were used to explain variable importance and enhance model interpretability.
    RESULTS: Our study included 2,213 patients, of whom 345 (15.6%) experienced in-hospital mortality. The Boruta algorithm identified 29 significant risk factors, and the top 13 variables were used for developing machine learning models. The gradient boosting classifier achieved the highest AUC of 0.8532, outperforming other models. SHAP analysis highlighted age, blood urea nitrogen, and pH as the most important predictors of mortality. SHAP waterfall plots provided detailed individualized risk assessments, demonstrating the model's ability to identify high-risk subgroups effectively.
    CONCLUSIONS: Machine learning models, especially the gradient boosting classifier, demonstrated superior performance in predicting in-hospital mortality in ICU patients with CHD and diabetes, outperforming traditional statistical methods. These models provide valuable insights for risk stratification and have the potential to improve clinical outcomes. Future work should focus on external validation and clinical implementation to further enhance their applicability and effectiveness in managing this high-risk population.
    DOI:  https://doi.org/10.1371/journal.pone.0330381
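The AUC values used above to compare the 16 models can be computed without plotting a curve, via the Mann-Whitney formulation. This is a generic illustrative sketch, not the study's evaluation code:

```python
def roc_auc(scores, labels):
    """Illustrative sketch: area under the ROC curve as the probability
    that a randomly chosen positive case outranks a randomly chosen
    negative case (ties count as 0.5) -- the Mann-Whitney formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical mortality-risk scores and outcomes (1 = died in hospital).
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 1])
```

An AUC of 0.8532, as reported for the gradient boosting classifier, means a randomly chosen non-survivor receives a higher predicted risk than a randomly chosen survivor about 85% of the time.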
  7. J Multidiscip Healthc. 2025;18: 4643-4651
      The global aging population is expanding at an unprecedented rate and is projected to reach 2 billion by 2050, presenting significant medical challenges, particularly multimorbidity and heterogeneous responses to treatment. Using diabetes as an illustrative case, this study explores the transformative potential of artificial intelligence (AI)-assisted clinical decision-making to advance personalized precision medicine for older adults. Through systematic analysis of current healthcare practices and emerging AI technologies, we examined the integration of machine learning algorithms, natural language processing, and intelligent monitoring systems into diabetes care for elderly populations. Based on current evidence showing up to 25% reduction in hospitalization rates and 30% increase in treatment adherence, we argue that AI integration represents a transformative approach to improving clinical outcomes in elderly diabetes care. We contend that AI-driven clinical decision support systems (CDSS) offer superior performance in risk prediction and treatment optimization, with studies demonstrating diagnostic accuracy rates of up to 93.07%, supporting our argument for their widespread implementation. Furthermore, AI-enhanced monitoring systems improved medication adherence by 17.9% compared to conventional monitoring approaches. Nonetheless, several challenges persist, including issues related to data standardization, algorithm transparency, and patient privacy protection. These results underscore the necessity of adopting a balanced implementation strategy that addresses both technical limitations and ethical considerations, while upholding patient autonomy. This perspective emphasizes the critical importance of multidisciplinary collaboration among healthcare professionals, technology developers, and regulatory authorities in establishing a comprehensive framework for AI deployment in clinical settings. 
By demonstrating the capacity of AI-assisted clinical decision-making to enhance healthcare quality and efficiency for elderly patients with diabetes, this study makes a meaningful contribution to the evolving field of personalized medicine.
    Keywords:  aged; clinical; decision support systems; diabetes mellitus; machine learning; medical informatics; telemedicine; type 2
    DOI:  https://doi.org/10.2147/JMDH.S529190
  8. Sci Rep. 2025 Aug 12. 15(1): 29521
      Type 2 diabetes mellitus (T2DM) is a major risk factor for coronary heart disease (CHD). In recent years, machine learning algorithms have demonstrated significant advantages in improving predictive accuracy; however, studies applying these methods to the clinical prediction and diagnosis of CHD-DM2 remain limited. This study aims to evaluate the performance of machine learning models and to develop an interpretable model that identifies critical risk factors of CHD-DM2, thereby supporting clinical decision-making. Data were collected from cardiovascular inpatients admitted to the First Affiliated Hospital of Xinjiang Medical University between 2001 and 2018. A total of 12,400 patients were included, comprising 10,257 cases of CHD and 2,143 cases of CHD-DM2. To address the class imbalance in the dataset, the SMOTENC algorithm was applied in conjunction with the themis package for data preprocessing. Final predictors were identified through a combined approach of univariate analysis and Lasso regression. We then developed and validated seven machine learning models: Logistic, Logistic_Lasso, KNN, SVM, XGBoost, RF, and LightGBM. The predictive performance of the seven models was compared using evaluation metrics including accuracy, sensitivity, specificity, AUC, ROC, and DCA. Additionally, SHAP values were employed to provide interpretability of the model outputs. The dataset was split into a training set (n = 8460) and a validation set (n = 3680) at a 7:3 ratio. A total of 25 predictive variables were ultimately identified through Lasso regression analysis. Among the seven machine learning models, the RF model demonstrated significantly superior performance and achieved the highest net benefit in the DCA. According to SHAP analysis, Diabetes.History, blood glucose (BG), and HbA1c were identified as the primary risk factors for CHD-DM2. 
It is recommended that hospitals enhance monitoring of such patients, document the presence of high-risk factors, and implement targeted intervention strategies accordingly.
    Keywords:  Coronary heart disease combined with type 2 diabetes; Imbalance processing; Machine learning; SHAP
    DOI:  https://doi.org/10.1038/s41598-025-11142-3
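The class-imbalance handling above relies on SMOTENC, whose core idea is interpolating synthetic minority samples between nearest neighbours. The sketch below illustrates only the numeric SMOTE interpolation under stated assumptions; it omits SMOTENC's categorical-feature handling and is not the study's preprocessing code.

```python
import random

def smote_like(minority, k=2, n_new=4, seed=0):
    """Illustrative sketch of SMOTE-style oversampling: each synthetic
    sample lies on the line segment between a minority sample and one
    of its k nearest neighbours (Euclidean distance). SMOTENC would
    additionally handle categorical columns, omitted here."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of `base`, excluding `base` itself.
        neighbours = sorted(
            (s for s in minority if s is not base),
            key=lambda s: sum((a - b) ** 2 for a, b in zip(s, base)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic

new_points = smote_like([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

Because each synthetic point is a convex combination of two real minority samples, oversampling stays inside the minority region instead of duplicating records verbatim.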
  9. Appl Opt. 2025 Mar 01. 64(7): 1668-1676
      Diabetic macular ischemia (DMI) is a critical vision-threatening complication of diabetic retinopathy. While optical coherence tomography angiography (OCTA) enables non-invasive DMI progression diagnosis, acquiring labeled datasets remains challenging. This study proposes a self-supervised learning framework for DMI grading that leverages both pixel- and frequency-domain features. By using a pixel-level transformer and global filter transformer, the approach extracts aggregated features from OCTA images through contrastive pretext tasks. Experimental validation on three retinal disease benchmarks demonstrates superior performance compared to state-of-the-art methods, with the framework showing promising capabilities in lesion area recognition and potential clinical diagnostic assistance.
    DOI:  https://doi.org/10.1364/AO.550755
  10. IEEE J Biomed Health Inform. 2025 Aug 14. PP
      Newly diagnosed Type 1 Diabetes (T1D) patients often struggle to obtain effective Blood Glucose (BG) prediction models due to the lack of sufficient BG data from Continuous Glucose Monitoring (CGM), presenting a significant "cold start" problem in patient care. Utilizing population models to address this challenge is a potential solution, but collecting patient data for training population models in a privacy-conscious manner is difficult, especially given that such data is often stored on personal devices. To protect privacy while addressing the "cold start" problem in diabetes care, we propose "GluADFL", blood Glucose prediction by Asynchronous Decentralized Federated Learning. We compared GluADFL with eight baseline methods on four distinct T1D datasets comprising 298 participants, demonstrating its superior performance in accurately predicting BG levels for cross-patient analysis. Furthermore, in GluADFL, patients' data may be stored and shared across various communication topologies, ranging from highly interconnected (e.g., random, which performed best) to more structured (e.g., cluster and ring), making the approach suitable for various social networks. The asynchronous training framework supports flexible participation: by adjusting the ratio of inactive participants, we found that performance remains stable as long as fewer than 70% are inactive. Our results confirm that GluADFL offers a practical, privacy-preserving solution for BG prediction in T1D, significantly enhancing the quality of diabetes management.
    DOI:  https://doi.org/10.1109/JBHI.2025.3573954
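Decentralized federated learning over the topologies mentioned above (ring, cluster, random) ultimately rests on neighbour-averaging of model parameters. GluADFL itself is asynchronous; the sketch below shows only a synchronous gossip-averaging round over a hypothetical 4-node ring, with scalar "weights" standing in for model parameters.

```python
def gossip_round(weights, topology):
    """Illustrative sketch: one synchronous round of decentralized
    averaging -- each node replaces its weight with the mean of its own
    and its neighbours' weights. topology[i] lists node i's neighbours."""
    return [
        (weights[i] + sum(weights[j] for j in topology[i])) / (1 + len(topology[i]))
        for i in range(len(weights))
    ]

# A 4-node ring: each node communicates with its two ring neighbours.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
w = [4.0, 0.0, 0.0, 0.0]  # only node 0 starts with a non-zero weight
for _ in range(50):
    w = gossip_round(w, ring)
# All nodes converge toward the global mean (1.0) without a central server.
```

Denser topologies (the "random" case in the paper) mix information in fewer rounds, which is consistent with the reported result that random topologies performed best.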
  11. Sensors (Basel). 2025 Jul 26. pii: 4647. [Epub ahead of print]25(15):
      More than 14% of the world's population suffered from diabetes mellitus in 2022. This metabolic condition is defined by increased blood glucose concentrations. Among the different types of diabetes, type 1 diabetes, caused by a lack of insulin secretion, is particularly challenging to treat. In this regard, automatic glucose level estimation using Continuous Glucose Monitoring (CGM) devices has shown positive therapeutic outcomes. AI-based glucose prediction has commonly followed a deterministic approach, usually lacking interpretability; such methods therefore do not provide enough information in critical decision-making scenarios like the medical field. This work intends to provide accurate, interpretable, and personalized glucose prediction using the Temporal Fusion Transformer (TFT), including an uncertainty estimate. The TFT was trained on two databases: an in-house-collected dataset and the OhioT1DM dataset, commonly used for glucose forecasting benchmarking. For both datasets, the set of input features used to train the model was varied to assess its impact on model interpretability and prediction performance. Models were evaluated using common prediction metrics, diabetes-specific metrics, uncertainty estimation, and model interpretability, including feature importance and attention. The obtained results showed that the TFT outperforms existing methods in terms of RMSE by at least 13% on both datasets.
    Keywords:  artificial intelligence; deep learning; explainable AI; glucose prediction; mHealth; personalized medicine; transformers
    DOI:  https://doi.org/10.3390/s25154647
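One common way to obtain the uncertainty estimate mentioned above is quantile regression with the pinball loss, which the TFT architecture supports; whether this study used exactly that loss is not stated in the abstract, so the sketch below is an assumption-labeled illustration.

```python
def pinball_loss(y_true, y_pred, q):
    """Illustrative sketch: quantile (pinball) loss. Training a
    forecaster against several quantiles (e.g. 0.1, 0.5, 0.9) yields a
    prediction interval -- one way to attach uncertainty to a glucose
    forecast."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        diff = yt - yp
        total += max(q * diff, (q - 1) * diff)
    return total / len(y_true)

# At q = 0.9 (upper band), under-prediction is penalized 9x more than
# over-prediction, pushing the forecast toward the distribution's top.
high_q = pinball_loss([100.0], [90.0], 0.9)
# At q = 0.1 (lower band), the asymmetry is reversed.
low_q = pinball_loss([100.0], [110.0], 0.1)
```

The gap between the 0.1- and 0.9-quantile forecasts then serves as a per-sample uncertainty band around the median prediction.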
  12. Sci Rep. 2025 Aug 09. 15(1): 29137
      Accurate forecasting of diabetes burden is essential for healthcare planning, resource allocation, and policy-making. While deep learning models have demonstrated superior predictive capabilities, their real-world applicability is constrained by computational complexity and data quality challenges. This study evaluates the trade-offs between predictive accuracy, robustness, and computational efficiency in diabetes forecasting. Four forecasting models were selected based on their ability to capture temporal dependencies and handle missing healthcare data: Transformer with Variational Autoencoder (VAE), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and AutoRegressive Integrated Moving Average (ARIMA). Annual data on Disability-Adjusted Life Years (DALYs), Deaths, and Prevalence from 1990 to 2021 were used to train (1990-2014) and evaluate (2015-2021) the models. Performance was measured using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). Robustness tests introduced noise and missing data, while computational efficiency was assessed in terms of training time, inference speed, and memory usage. Statistical significance was analyzed using ANOVA and Tukey's post-hoc tests. The Transformer-VAE model achieved the highest predictive accuracy (MAE: 0.425, RMSE: 0.501) and demonstrated superior resilience to noisy and incomplete data ([Formula: see text]). LSTM effectively captured short-term patterns but struggled with long-term dependencies, while GRU, though computationally efficient, exhibited higher error rates. ARIMA, despite being resource-efficient, showed limited capability in modeling long-term trends, indicating potential benefits in hybrid approaches. While Transformer-VAE provides the most accurate diabetes burden forecasting, its high computational cost and interpretability challenges limit its scalability in resource-constrained settings. 
These findings highlight the potential of deep learning models for healthcare forecasting, while underscoring the need for further validation before integration into real-world public health decision-making.
    Keywords:  Artificial intelligence; Deep learning; Diabetes mellitus; Disability-adjusted life years; Forecasting; Global burden of disease; Public health forecasting applications; Robustness to noisy and incomplete data
    DOI:  https://doi.org/10.1038/s41598-025-14599-4
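The two accuracy metrics used above to rank the four forecasters, MAE and RMSE, can be stated in a few lines; this generic sketch is not the study's evaluation code.

```python
import math

def mae_rmse(actual, forecast):
    """Illustrative sketch: Mean Absolute Error and Root Mean Squared
    Error over paired actual/forecast series."""
    errors = [a - f for a, f in zip(actual, forecast)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

# Hypothetical burden series: RMSE exceeds MAE because squaring
# weights the single large error (2.0) more heavily.
mae, rmse = mae_rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0])
```

Reporting both is informative precisely because of that asymmetry: a model with a low MAE but high RMSE makes occasional large mistakes, which matters for burden forecasts feeding policy decisions.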
  13. Front Endocrinol (Lausanne). 2025;16: 1610884
     Background: This study aims to improve the surgical cure rate, develop interventions to reduce the incidence of postoperative nonunion or recurrence of diabetic foot wounds, and formulate an optimal prediction model to quantify the risk of antibiotic bone-cement failure in the treatment of diabetic foot.
    Methods: The training and test sets were created once the cases were collected. Based on feature correlation, feature importance, and feature weight, LASSO analysis, random forest, and the Pearson correlation coefficient approach were used to identify the features. Artificial neural network, support vector machine, and XGBoost prediction models were built according to the selected optimal features. The receiver operating characteristic curve, precision-recall (PR) curve, and decision curve analysis were utilized to validate the performance of the models and select the optimal prediction model. Lastly, an independent test set was created to assess and determine the best model's capacity for generalization.
    Results: A comparative analysis revealed that the area under the curve (AUC) for the training set of the PRL-XGBoost prediction model was 0.85 and that for the test set was 0.71. This finding suggests that the model exhibits good predictive ability. Moreover, the PR-AUC value of the prediction model was 0.97, indicating that it demonstrates good resistance to overfitting. Additionally, the DCA curve showed that the PRL-XGBoost prediction model has significant application value and practicality. Therefore, PRL-XGBoost was found to be the most effective prediction model.
    Conclusions: The findings from this study show that γ-glutamyl transpeptidase, lipoprotein A, peripheral vascular disease, peripheral neuropathy, and white blood cells are the key indices that affect the surgical outcome. These parameters determine the nutritional and immune status of the distal lower limbs, leading to ulceration, infection, and nonunion of the diabetic foot. Hence, the PRL-XGBoost prediction model can be applied for the preoperative evaluation and screening of patients with diabetic foot treated with antibiotic bone cement, resulting in favorable clinical outcomes.
    Keywords:  XGBoost; antibiotic bone cement; decision curve analysis; diabetes mellitus; diabetic foot; diabetic foot ulceration; feature selection
    DOI:  https://doi.org/10.3389/fendo.2025.1610884
  14. Int J Cardiol. 2025 Aug 08. pii: S0167-5273(25)00786-7. [Epub ahead of print]441 133743
       BACKGROUND: Heart failure (HF) and diabetes mellitus (DM) frequently coexist, exacerbating disease progression and increasing hospital readmission risk. Accurate prediction of readmission in HF patients with DM remains a clinical challenge. This study aims to develop and validate a machine learning (ML)-based model incorporating inflammatory and metabolic biomarkers to enhance risk stratification.
    METHODS: This retrospective cohort study included HF patients with DM hospitalized between January 2020 and February 2024. A total of 716 patients were randomly divided into training (70 %) and validation (30 %) sets. Seven ML models were developed using clinical parameters, inflammatory markers, and metabolic indices. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC), calibration, sensitivity, specificity, and Brier score, among others. External validation was conducted using an independent cohort of 687 patients. SHapley Additive Explanations (SHAP) analysis was applied for model interpretability, and a web-based dynamic nomogram was developed for clinical implementation.
    RESULTS: Among 716 patients, 256 (35.8 %) were readmitted within one year. The random forest (RF) model demonstrated superior performance (AUC = 0.87, Brier score = 0.151), outperforming other ML models. External validation confirmed its generalizability (AUC = 0.82). SHAP analysis identified age, brain natriuretic peptide (BNP), New York Heart Association (NYHA) class, HF classification, and triglyceride-glucose body mass index (TYG-BMI) as key predictors. The dynamic nomogram provided individualized risk predictions, enhancing clinical applicability.
    CONCLUSIONS: This study developed an ML-based model integrating inflammatory and metabolic biomarkers for predicting readmission in HF patients with DM. The model demonstrated robust performance and interpretability, showing potential as a supportive tool for early risk identification and personalized risk communication in clinical settings.
    Keywords:  Diabetes mellitus; Heart failure; Machine learning; Metabolic biomarkers; Metabolism; Readmission
    DOI:  https://doi.org/10.1016/j.ijcard.2025.133743
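Among the SHAP-identified predictors above, TYG-BMI is a derived index rather than a raw measurement. It is commonly computed as TyG x BMI, with TyG = ln(fasting triglycerides [mg/dL] x fasting glucose [mg/dL] / 2); the study's exact preprocessing is not shown, so treat this as the conventional formula, not the paper's code.

```python
import math

def tyg_bmi(triglycerides_mgdl, glucose_mgdl, weight_kg, height_m):
    """Illustrative sketch of the conventional TYG-BMI computation:
    TyG = ln(TG x FPG / 2), both in mg/dL; TYG-BMI = TyG x BMI."""
    tyg = math.log(triglycerides_mgdl * glucose_mgdl / 2)
    bmi = weight_kg / height_m ** 2
    return tyg * bmi

# Hypothetical patient: TG 150 mg/dL, FPG 100 mg/dL, 80 kg, 1.75 m.
score = tyg_bmi(150.0, 100.0, 80.0, 1.75)
```

Because the index multiplies a metabolic term by adiposity, it rises with both insulin resistance surrogates and body mass, which is why it can act as a single readmission-risk feature.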
  15. J Diabetes Sci Technol. 2025 Aug 14. 19322968251353228
      New methods of continuous glucose monitoring (CGM) data analysis are emerging that are valuable for interpreting CGM patterns and underlying metabolic physiology. These new methods use functional data analysis and artificial intelligence (AI), including machine learning (ML). Compared to traditional metrics for evaluating CGM tracing results (CGM Data Analysis 1.0), these new methods, which we refer to as CGM Data Analysis 2.0, can provide a more detailed understanding of glucose fluctuations and trends and enable more personalized and effective diabetes management strategies once translated into practical clinical solutions.
    Keywords:  CGM; artificial intelligence; diabetes; machine learning; pattern analysis
    DOI:  https://doi.org/10.1177/19322968251353228
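The "CGM Data Analysis 1.0" metrics the paper contrasts with the newer functional/AI methods are simple summaries of the glucose trace. A minimal sketch, assuming a flat list of mg/dL readings and the conventional 70-180 mg/dL target range:

```python
def cgm_summary(readings_mgdl, low=70, high=180):
    """Illustrative sketch of traditional CGM metrics: mean glucose,
    coefficient of variation (%), and percent time-in-range, using the
    conventional 70-180 mg/dL target range."""
    n = len(readings_mgdl)
    mean = sum(readings_mgdl) / n
    sd = (sum((g - mean) ** 2 for g in readings_mgdl) / n) ** 0.5
    cv = 100 * sd / mean                                   # glycemic variability
    tir = 100 * sum(low <= g <= high for g in readings_mgdl) / n
    return {"mean": mean, "cv": cv, "tir": tir}

summary = cgm_summary([90, 110, 150, 200, 85, 160, 240, 130])
```

These scalar summaries discard the shape and timing of fluctuations, which is exactly the information the functional-data and ML approaches ("CGM Data Analysis 2.0") aim to exploit.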
  16. Acad Radiol. 2025 Aug 08. pii: S1076-6332(25)00714-7. [Epub ahead of print]
       RATIONALE AND OBJECTIVES: Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN.
    MATERIALS AND METHODS: In our study, a total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were utilized to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA). Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model.
    RESULTS: The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated a good model fit and suggested potential clinical utility, and SHAP analysis was used to explain the model's predictions.
    CONCLUSION: The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
    Keywords:  Co-plane; Diabetic peripheral neuropathy; Machine learning; Tibial nerve; Ultrasound
    DOI:  https://doi.org/10.1016/j.acra.2025.07.044