bims-arihec Biomed News
on Artificial intelligence in healthcare
Issue of 2020-01-19
twenty-two papers selected by
Céline Bélanger, Cogniges Inc.



  1. Anesthesiology. 2020 Feb;132(2): 379-394
      Artificial intelligence has been advancing in fields including anesthesiology. This scoping review of the intersection of artificial intelligence and anesthesia research identified and summarized six themes of applications of artificial intelligence in anesthesiology: (1) depth of anesthesia monitoring, (2) control of anesthesia, (3) event and risk prediction, (4) ultrasound guidance, (5) pain management, and (6) operating room logistics. Based on papers identified in the review, several topics within artificial intelligence were described and summarized: (1) machine learning (including supervised, unsupervised, and reinforcement learning), (2) techniques in artificial intelligence (e.g., classical machine learning, neural networks and deep learning, Bayesian methods), and (3) major applied fields in artificial intelligence. The implications of artificial intelligence for the practicing anesthesiologist are discussed, as are its limitations and the role of clinicians in further developing artificial intelligence for use in clinical care. Artificial intelligence has the potential to impact the practice of anesthesiology in aspects ranging from perioperative support to critical care delivery to outpatient pain management.
    DOI:  https://doi.org/10.1097/ALN.0000000000002960
  2. Sci Rep. 2020 Jan 14. 10(1): 205
      Severely burned and non-burned trauma patients are at risk for acute kidney injury (AKI). The study objective was to assess the theoretical performance of artificial intelligence (AI)/machine learning (ML) algorithms to augment AKI recognition using the novel biomarker neutrophil gelatinase-associated lipocalin (NGAL), combined with contemporary biomarkers such as N-terminal pro B-type natriuretic peptide (NT-proBNP), urine output (UOP), and plasma creatinine. Machine learning approaches including logistic regression (LR), k-nearest neighbor (k-NN), support vector machine (SVM), random forest (RF), and deep neural networks (DNN) were used in this study. The AI/ML algorithm helped predict AKI 61.8 (32.5) hours faster than the Kidney Disease: Improving Global Outcomes (KDIGO) criteria for burn and non-burned trauma patients. NGAL was analytically superior to traditional AKI biomarkers such as creatinine and UOP. With ML, the AKI predictive capability of NGAL was further enhanced when combined with NT-proBNP or creatinine. The use of AI/ML could be employed with NGAL to accelerate detection of AKI in at-risk burn and non-burned trauma patients.
    DOI:  https://doi.org/10.1038/s41598-019-57083-6
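    A minimal scikit-learn sketch of the model comparison described above: the five classifier families named in the abstract, evaluated on a synthetic stand-in for the biomarker matrix. Feature meanings, data, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for NGAL, NT-proBNP, creatinine, UOP
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "DNN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    print(f"{name}: AUC = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```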
  3. Sci Rep. 2020 Jan 13. 10(1): 170
      Tracking the fluctuations in blood glucose levels is important for healthy subjects and crucial for diabetic patients. Tight glucose monitoring reduces the risk of hypoglycemia, which can result in complications such as confusion, irritability, and seizures, especially in diabetic patients, and can even be fatal in specific conditions. Hypoglycemia affects the electrophysiology of the heart. However, due to strong inter-subject heterogeneity, previous studies based on cohorts of subjects failed to deploy electrocardiogram (ECG)-based hypoglycemia detection systems reliably. The current study used a personalised medicine approach and Artificial Intelligence (AI) to automatically detect nocturnal hypoglycemia using a few heartbeats of raw ECG signal recorded with non-invasive, wearable devices in healthy individuals monitored 24 hours a day for 14 consecutive days. Additionally, we present a visualisation method enabling clinicians to visualise which part of the ECG signal (e.g., T-wave, ST-interval) is significantly associated with the hypoglycemic event in each subject, overcoming the intelligibility problem of deep-learning methods. These results advance the feasibility of a real-time, non-invasive hypoglycemia alarming system using short excerpts of ECG signal.
    DOI:  https://doi.org/10.1038/s41598-019-56927-5
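    A hedged sketch of the visualisation idea mentioned above: input-gradient saliency on a toy 1D convolutional network over a raw ECG excerpt, highlighting which samples drive the predicted hypoglycemia score. The architecture and signal length are assumptions, not the study's model.

```python
import torch
import torch.nn as nn

# Toy 1D CNN standing in for the study's (unspecified) deep model.
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 1),
)

ecg = torch.randn(1, 1, 500, requires_grad=True)  # a few heartbeats of raw ECG
score = net(ecg).squeeze()                        # hypoglycemia score
score.backward()
saliency = ecg.grad.abs().squeeze()               # per-sample contribution
# Plotting `saliency` against the ECG trace highlights the segments
# (e.g., T-wave, ST interval) most associated with the prediction.
```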
  4. J Am Med Inform Assoc. 2020 Jan 17. pii: ocz211. [Epub ahead of print]
       OBJECTIVES: Current machine learning models aiming to predict sepsis from electronic health records (EHR) do not account for the heterogeneity of the condition despite its emerging importance in prognosis and treatment. This work demonstrates the added value of stratifying the types of organ dysfunction observed in patients who develop sepsis in the intensive care unit (ICU) in improving the ability to recognize patients at risk of sepsis from their EHR data.
    MATERIALS AND METHODS: Using an ICU dataset of 13 728 records, we identify clinically significant sepsis subpopulations with distinct organ dysfunction patterns. We perform classification experiments with random forest, gradient boost trees, and support vector machines, using the identified subpopulations to distinguish patients who develop sepsis in the ICU from those who do not.
    RESULTS: The classification results show that features selected using sepsis subpopulations as background knowledge yield a superior performance in distinguishing septic from non-septic patients regardless of the classification model used. The improved performance is especially pronounced in specificity, which is a current bottleneck in sepsis prediction machine learning models.
    CONCLUSION: Our findings can steer machine learning efforts toward more personalized models for complex conditions including sepsis.
    Keywords:  artificial intelligence in medicine; machine learning; sepsis; sepsis prediction; sepsis subtypes
    DOI:  https://doi.org/10.1093/jamia/ocz211
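    A hedged sketch of the general strategy described above (not the paper's pipeline): cluster septic patients into putative organ-dysfunction subtypes, select features that separate each subtype from non-septic patients, and pool the selected features for a random-forest classifier. Cluster counts, the selector, and the data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 40))                 # stand-in EHR features
y = (X[:, 0] + X[:, 1] > 1).astype(int)         # 1 = developed sepsis in the ICU

# Cluster only the septic patients into putative organ-dysfunction subtypes.
septic = X[y == 1]
subtypes = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(septic)

# For each subtype, keep the features that best separate it from non-septic
# patients; the union forms the subpopulation-informed feature set.
selected = set()
controls = X[y == 0]
for k in range(3):
    Xk = np.vstack([septic[subtypes == k], controls])
    yk = np.r_[np.ones((subtypes == k).sum()), np.zeros(len(controls))]
    sel = SelectKBest(f_classif, k=5).fit(Xk, yk)
    selected.update(np.flatnonzero(sel.get_support()))

cols = sorted(selected)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X[:, cols], y)
```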
  5. Breast. 2019 Dec 19. pii: S0960-9776(19)31214-7. 49: 267-273. [Epub ahead of print]
      Breast cancer is the most common cancer and the second leading cause of cancer-related death worldwide. The mainstay of breast cancer workup is histopathological diagnosis, which guides therapy and prognosis. However, emerging knowledge about the complex nature of cancer and the availability of tailored therapies have exposed opportunities for improvements in diagnostic precision. In parallel, advances in artificial intelligence (AI), along with the growing digitization of pathology slides for primary diagnosis, are a promising approach to meet the demand for more accurate detection, classification, and prediction of the behaviour of breast tumours. In this article, we cover the current and prospective uses of AI in digital pathology for breast cancer, review the basics of digital pathology and AI, and outline outstanding challenges in the field.
    Keywords:  Artificial intelligence; Deep learning; Machine learning; Whole slide image; AI; Applications; Breast cancer; Breast pathology; DL; Digital; ML; Pathology; WSI
    DOI:  https://doi.org/10.1016/j.breast.2019.12.007
  6. Gastrointest Endosc. 2020 Jan 10. pii: S0016-5107(20)30026-2. [Epub ahead of print]
       BACKGROUND AND AIMS: The visual detection of early esophageal neoplasia (high-grade dysplasia and T1 cancer) in Barrett's esophagus (BE) with white-light and virtual chromoendoscopy remains challenging. The aim of this study was to assess whether an artificial intelligence system based on a convolutional neural network (CNN) can aid in the recognition of early esophageal neoplasia in BE.
    METHODS: Nine hundred sixteen images of histology-proven early esophageal neoplasia in BE containing high-grade dysplasia or T1 cancer were collected from 65 patients. The area of neoplasia was masked using image annotation software. Nine hundred nineteen control images of BE without high-grade dysplasia were also collected. A CNN algorithm was pretrained on ImageNet and then fine-tuned with the goal of providing the correct binary classification of "dysplastic" or "nondysplastic." We developed an object detection algorithm that drew localization boxes around regions classified as dysplasia.
    RESULTS: The CNN analyzed 458 test images (225 dysplasia/233 nondysplasia) and correctly detected early neoplasia with a sensitivity of 96.4%, specificity of 94.2%, and accuracy of 95.4%. For the object detection algorithm, across all images in the validation set the system achieved a mean average precision (mAP) of 0.7533 at an intersection over union (IoU) of 0.3.
    CONCLUSION: In this pilot study, our AI model detected early esophageal neoplasia in Barrett's esophagus images with high accuracy. In addition, the object detection algorithm drew a localization box around the areas of dysplasia with high precision and at a speed that allows for real-time implementation.
    Keywords:  Barrett’s esophagus; artificial intelligence; dysplasia
    DOI:  https://doi.org/10.1016/j.gie.2019.12.049
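    An illustrative transfer-learning sketch in PyTorch. The abstract does not state the framework or backbone, so the ResNet-50 choice, input size, and optimizer settings are assumptions; the pattern shown (ImageNet weights, replaced head, fine-tuning for the binary classification) follows the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

# Requires torchvision >= 0.13 for the weights enum.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # "dysplastic" vs "nondysplastic"

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of endoscopy images (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```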
  7. JAMA Netw Open. 2020 Jan 03. 3(1): e1919396
       Importance: Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia, and its early detection could lead to significant improvements in outcomes through the appropriate prescription of anticoagulation medication. Although a variety of methods exist for screening for AF, a targeted approach, which requires an efficient method for identifying patients at risk, would be preferred.
    Objective: To examine machine learning approaches applied to electronic health record data that have been harmonized to the Observational Medical Outcomes Partnership Common Data Model for identifying risk of AF.
    Design, Setting, and Participants: This diagnostic study used data from 2 252 219 individuals cared for in the UCHealth hospital system, which comprises 3 large hospitals in Colorado, from January 1, 2011, to October 1, 2018. Initial analysis was performed in December 2018; follow-up analysis was performed in July 2019.
    Exposures: All Observational Medical Outcomes Partnership Common Data Model-harmonized electronic health record features, including diagnoses, procedures, medications, age, and sex.
    Main Outcomes and Measures: Classification of incident AF in designated 6-month intervals, adjudicated retrospectively, based on area under the receiver operating characteristic curve and F1 statistic.
    Results: Of 2 252 219 individuals (1 225 533 [54.4%] women; mean [SD] age, 42.9 [22.3] years), 28 036 (1.2%) developed incident AF during a designated 6-month interval. The machine learning model that used the 200 most common electronic health record features, including age and sex, and random oversampling with a single-layer, fully connected neural network provided the optimal prediction of 6-month incident AF, with an area under the receiver operating characteristic curve of 0.800 and an F1 score of 0.110. This model performed only slightly better than a more basic logistic regression model composed of known clinical risk factors for AF, which had an area under the receiver operating characteristic curve of 0.794 and an F1 score of 0.079.
    Conclusions and Relevance: Machine learning approaches to electronic health record data offer a promising method for improving risk prediction for incident AF, but more work is needed to show improvement beyond standard risk factors.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2019.19396
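    A minimal sketch of the modelling recipe reported above (random oversampling of the rare incident-AF class, then a single-hidden-layer fully connected network), using imbalanced-learn and scikit-learn on synthetic stand-in data; sizes echo the abstract, everything else is an assumption.

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))  # stand-in for the 200 most common EHR features
# Weak signal with ~1% positives, mimicking the reported 1.2% incident AF rate.
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=2, size=5000)) > 5.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_bal, y_bal)

print("AUROC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
print("F1:", f1_score(y_te, net.predict(X_te)))
```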
  8. Lancet Oncol. 2020 Jan 08. pii: S1470-2045(19)30738-7. [Epub ahead of print]
       BACKGROUND: An increasing volume of prostate biopsies and a worldwide shortage of urological pathologists puts a strain on pathology departments. Additionally, the high intra-observer and inter-observer variability in grading can result in overtreatment and undertreatment of prostate cancer. To alleviate these problems, we aimed to develop an artificial intelligence (AI) system with clinically acceptable accuracy for prostate cancer detection, localisation, and Gleason grading.
    METHODS: We digitised 6682 slides from needle core biopsies from 976 randomly selected participants aged 50-69 in the Swedish prospective and population-based STHLM3 diagnostic study done between May 28, 2012, and Dec 30, 2014 (ISRCTN84445406), and another 271 from 93 men from outside the study. The resulting images were used to train deep neural networks for assessment of prostate biopsies. The networks were evaluated by predicting the presence, extent, and Gleason grade of malignant tissue for an independent test dataset comprising 1631 biopsies from 246 men from STHLM3 and an external validation dataset of 330 biopsies from 73 men. We also evaluated grading performance on 87 biopsies individually graded by 23 experienced urological pathologists from the International Society of Urological Pathology. We assessed discriminatory performance by receiver operating characteristics and tumour extent predictions by correlating predicted cancer length against measurements by the reporting pathologist. We quantified the concordance between grades assigned by the AI system and the expert urological pathologists using Cohen's kappa.
    FINDINGS: The AI achieved an area under the receiver operating characteristics curve of 0·997 (95% CI 0·994-0·999) for distinguishing between benign (n=910) and malignant (n=721) biopsy cores on the independent test dataset and 0·986 (0·972-0·996) on the external validation dataset (benign n=108, malignant n=222). The correlation between cancer length predicted by the AI and assigned by the reporting pathologist was 0·96 (95% CI 0·95-0·97) for the independent test dataset and 0·87 (0·84-0·90) for the external validation dataset. For assigning Gleason grades, the AI achieved a mean pairwise kappa of 0·62, which was within the range of the corresponding values for the expert pathologists (0·60-0·73).
    INTERPRETATION: An AI system can be trained to detect and grade cancer in prostate needle biopsy samples at a level comparable to that of international experts in prostate pathology. Clinical application could reduce pathology workload by reducing the assessment of benign biopsies and by automating the task of measuring cancer length in positive biopsy cores. An AI system with expert-level grading performance might contribute a second opinion, aid in standardising grading, and provide pathology expertise in parts of the world where it does not exist.
    FUNDING: Swedish Research Council, Swedish Cancer Society, Swedish eScience Research Center, EIT Health.
    DOI:  https://doi.org/10.1016/S1470-2045(19)30738-7
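    The concordance statistic used above is Cohen's kappa; a minimal illustration with made-up grade assignments for ten biopsies (the paper's grades and weighting scheme are not reproduced here).

```python
from sklearn.metrics import cohen_kappa_score

ai_grades          = [1, 2, 2, 3, 4, 5, 1, 3, 2, 4]
pathologist_grades = [1, 2, 3, 3, 4, 5, 1, 2, 2, 4]

# Agreement beyond chance; pass weights="linear" to respect ordinal grades.
print(cohen_kappa_score(ai_grades, pathologist_grades))
```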
  9. Health Inf Sci Syst. 2020 Dec;8(1): 8
        Purpose: A neural network-based diagnostic algorithm is developed for the diagnosis of dental caries in digital radiographs, and its diagnostic performance is evaluated.
    Methods: The diagnostic system comprises Laplacian filtering, window-based adaptive thresholding, morphological operations, statistical feature extraction, and a back-propagation neural network, which is used to classify a tooth surface as normal or as having dental caries. The 105 images, derived from intra-oral digital radiography, are used to train the artificial neural network with 10-fold cross-validation. The caries in these dental radiographs were annotated by a dentist. The performance of the diagnostic algorithm is evaluated and compared with baseline methods.
    Results: The system gives an accuracy of 97.1%, a false positive (FP) rate of 2.8%, a receiver operating characteristic (ROC) area of 0.987, and a precision-recall curve (PRC) area of 0.987 with a learning rate of 0.4, momentum of 0.2, 500 iterations, and a single hidden layer with 9 nodes.
    Conclusions: This study suggests that dental caries can be detected more accurately with a back-propagation neural network. There is a need to improve the system's classification of caries depth. Improved algorithms and larger, higher-quality datasets may yield still better tooth decay detection in clinical dental practice.
    Keywords:  Back propagation neural network; Computer assisted diagnosis; Dental caries; Machine learning
    DOI:  https://doi.org/10.1007/s13755-019-0096-y
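    A hedged OpenCV sketch of the preprocessing chain named above (Laplacian filtering, window-based adaptive thresholding, morphological operations, statistical features). Filter and window sizes are illustrative, not the paper's settings; the resulting feature vector would feed a back-propagation network such as scikit-learn's MLPClassifier.

```python
import cv2
import numpy as np

def extract_features(radiograph_gray: np.ndarray) -> np.ndarray:
    """radiograph_gray: 8-bit grayscale intra-oral radiograph."""
    edges = cv2.Laplacian(radiograph_gray, cv2.CV_64F, ksize=3)
    edges = cv2.convertScaleAbs(edges)
    binary = cv2.adaptiveThreshold(edges, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, blockSize=15, C=2)
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    region = radiograph_gray[cleaned > 0]
    if region.size == 0:
        region = radiograph_gray.ravel()
    # Simple statistical features: mean, std, median, grey-level entropy.
    hist = np.bincount(region, minlength=256) / region.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    return np.array([region.mean(), region.std(), np.median(region), entropy])
```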
  10. Pharmaceut Med. 2019 Apr;33(2): 109-120
       INTRODUCTION: Pharmacovigilance (PV) detects, assesses, and prevents adverse events (AEs) and other drug-related problems by collecting, evaluating, and acting upon AEs. The volume of individual case safety reports (ICSRs) increases yearly, but it is estimated that more than 90% of AEs go unreported. In this landscape, embracing assistive technologies at scale becomes necessary to obtain a higher yield of AEs, to maintain compliance, and to transform the work life of PV professionals.
    AIM: The aim of this study was to identify areas across the PV value chain that can be augmented by cognitive service solutions, using the methodologies of contextual analysis and cognitive load theory. We also provide a framework for validating these PV cognitive services by leveraging the acceptable quality limit approach.
    METHODS: The data used to train the cognitive services were an annotated corpus of 20,000 ICSRs, from which we developed a framework to identify and validate 40 cognitive services ranging from information extraction to complex decision making. This framework addresses the following shortcomings: (1) the need for subject-matter expertise (SME) to match the artificial intelligence (AI) model predictions to the gold standard, commonly referred to as 'ground truth' in the AI space; (2) ground truth inconsistencies; (3) automated validation of predictions missing context; and (4) auto-labeling causing inaccurate test accuracy. The method consists of (1) conducting contextual analysis, (2) assessing human cognitive workload, (3) determining decision points for applying AI, (4) defining the scope of the data, or annotated corpus, required for training and validation of the cognitive services, (5) identifying and standardizing PV knowledge elements, (6) developing cognitive services, and (7) reviewing and validating cognitive services.
    RESULTS: By applying the framework, we (1) identified 51 decision points as candidates for AI use, (2) standardized the process to make PV knowledge explicit, (3) embedded SMEs in the process to preserve PV knowledge and context, (4) standardized acceptability by using established quality inspection principles, and (5) validated a total of 126 cognitive services.
    CONCLUSION: The value of using AI methodologies in PV is compelling; however, as PV is highly regulated, acceptability will require assurances of quality, consistency, and standardization. We are proposing a foundational framework that the industry can use to identify and validate services to better support the gathering of quality data and to better serve the PV professional.
    DOI:  https://doi.org/10.1007/s40290-019-00269-0
  11. Diabetes Metab Res Rev. 2020 Jan 14. e3252
       AIMS: Identification, a priori, of those at high risk of progression from pre-diabetes to diabetes may enable targeted delivery of interventional programmes while avoiding the burden of prevention and treatment in those at low risk. We studied whether the use of a machine-learning model can improve the prediction of incident diabetes utilizing patient data from electronic medical records.
    METHODS: A machine-learning model predicting the progression from pre-diabetes to diabetes was developed using a gradient boosted trees model. The model was trained on data from The Health Improvement Network (THIN) database cohort, internally validated on THIN data not used for training, and externally validated on the Canadian AppleTree and the Israeli Maccabi Health Services (MHS) data sets. The model's predictive ability was compared with that of a logistic-regression model within each data set.
    RESULTS: A cohort of 852 454 individuals with pre-diabetes (glucose ≥ 100 mg/dL and/or HbA1c ≥ 5.7%) was used for model training, including 4.9 million time points and 900 features. The full model was eventually implemented using 69 variables generated from 11 basic signals. The machine-learning model demonstrated superiority over the logistic-regression model at all sensitivity levels, comparing AUC [95% CI] between the models: THIN data set, 0.865 [0.860, 0.869] vs 0.778 [0.773, 0.784]; AppleTree data set, 0.907 [0.896, 0.919] vs 0.880 [0.867, 0.894]; and MHS data set, 0.925 [0.923, 0.927] vs 0.876 [0.872, 0.879] (all P < .05).
    CONCLUSIONS: Machine-learning models preserve their performance across populations in diabetes prediction, and can be integrated into large clinical systems, leading to judicious selection of persons for interventional programmes.
    Keywords:  electronic medical records; machine learning; pre-diabetes
    DOI:  https://doi.org/10.1002/dmrr.3252
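    A minimal sketch of the head-to-head comparison described above: gradient boosted trees versus logistic regression on the same tabular features, scored by cross-validated AUC. The synthetic label rule is deliberately non-linear to show why boosted trees can outperform a linear model; data and dimensions are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 11))  # e.g., 11 basic signals
y = ((X[:, 0] * X[:, 1] > 0.3) | (X[:, 2] > 1.2)).astype(int)  # non-linear rule

for name, model in [("GBT", GradientBoostingClassifier(random_state=42)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```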
  12. Metab Syndr Relat Disord. 2020 Jan 13.
      Aim: The primary objective of our research was to compare the performance of data analysis to predict vitamin D deficiency using three different regression approaches and to evaluate the usefulness of incorporating machine learning algorithms into the data analysis in a clinical setting.
    Methods: We included 221 patients from our hypertension unit, whose data were collected from electronic records dated between 2006 and 2017. We used classical stepwise logistic regression, and two machine learning methods [least absolute shrinkage and selection operator (LASSO) and elastic net]. We assessed the performance of these three algorithms in terms of sensitivity, specificity, misclassification error, and area under the curve (AUC).
    Results: LASSO and elastic net regression performed better than logistic regression in terms of AUC, which was significantly higher with both penalized methods (AUC = 0.76 for elastic net and AUC = 0.74 for LASSO) than with logistic regression (AUC = 0.64). In terms of misclassification rate, elastic net (18%) outperformed LASSO (22%) and logistic regression (25%).
    Conclusion: Compared with a classical logistic regression approach, penalized methods were found to have better performance in predicting vitamin D deficiency. The use of machine learning algorithms such as LASSO and elastic net may significantly improve the prediction of vitamin D deficiency in a hypertensive obese population.
    Keywords:  metabolic syndrome; obesity; penalized regression; vitamin D
    DOI:  https://doi.org/10.1089/met.2019.0104
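    A minimal sketch of the three regression approaches compared above, expressed with scikit-learn's penalized logistic regression (LASSO = L1 penalty, elastic net = mixed L1/L2). The sample size mirrors the 221-patient cohort; everything else is synthetic, and penalty=None requires scikit-learn >= 1.2.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=221, n_features=15, random_state=0)

models = {
    "logistic": LogisticRegression(penalty=None, max_iter=5000),
    "LASSO": LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000),
    "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, C=1.0, max_iter=5000),
}
for name, model in models.items():
    model.fit(X, y)
    kept = (model.coef_ != 0).sum()  # penalties shrink some coefficients to zero
    print(f"{name}: {kept} non-zero coefficients")
```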
  13. Atherosclerosis. 2019 Dec 23. pii: S0021-9150(19)31607-7. 294: 25-32. [Epub ahead of print]
       BACKGROUND AND AIMS: Artificial intelligence (AI) is playing an increasing role in the diagnosis of patients with suspected coronary artery disease. The aim of this study was to develop a deep convolutional neural network (CNN) to classify coronary computed tomography angiography (CCTA) into the correct Coronary Artery Disease Reporting and Data System (CAD-RADS) category.
    METHODS: Two hundred eighty-eight patients who underwent clinically indicated CCTA were included in this single-center retrospective study. The CCTAs were stratified by CAD-RADS score by expert readers and used as the reference standard. A deep CNN was designed and tested on the CCTA dataset and compared with on-site reading. The diagnostic accuracy of the deep CNN was analyzed for three models based on the CAD-RADS classification: Model A (CAD-RADS 0 vs CAD-RADS 1-2 vs CAD-RADS 3-5), Model 1 (CAD-RADS 0 vs CAD-RADS >0), and Model 2 (CAD-RADS 0-2 vs CAD-RADS 3-5). Time of analysis for both physicians and the CNN was recorded.
    RESULTS: Model A showed a sensitivity, specificity, negative predictive value, positive predictive value, and accuracy of 47%, 74%, 77%, 46%, and 60%, respectively. Model 1 showed a sensitivity, specificity, negative predictive value, positive predictive value, and accuracy of 66%, 91%, 92%, 63%, and 86%, respectively. Model 2 demonstrated a sensitivity, specificity, negative predictive value, positive predictive value, and accuracy of 82%, 58%, 74%, 69%, and 71%, respectively. Time of analysis was significantly lower with the CNN than with on-site reading (104.3 ± 1.4 vs 530.5 ± 179.1 sec, p=0.01).
    CONCLUSIONS: The deep CNN yielded accurate automated classification of patients into CAD-RADS categories.
    Keywords:  Artificial intelligence; CADRADS; Convolutional neural network; Coronary artery disease; Plaque characterization
    DOI:  https://doi.org/10.1016/j.atherosclerosis.2019.12.001
  14. Nat Commun. 2020 Jan 17. 11(1): 363
      Infections have become the major cause of morbidity and mortality among patients with chronic lymphocytic leukemia (CLL) due to immune dysfunction and cytotoxic CLL treatment. Yet, predictive models for infection are missing. In this work, we develop the CLL Treatment-Infection Model (CLL-TIM) that identifies patients at risk of infection or CLL treatment within 2 years of diagnosis as validated on both internal and external cohorts. CLL-TIM is an ensemble algorithm composed of 28 machine learning algorithms based on data from 4,149 patients with CLL. The model is capable of dealing with heterogeneous data, including the high rates of missing data to be expected in the real-world setting, with a precision of 72% and a recall of 75%. To address concerns regarding the use of complex machine learning algorithms in the clinic, for each patient with CLL, CLL-TIM provides explainable predictions through uncertainty estimates and personalized risk factors.
    DOI:  https://doi.org/10.1038/s41467-019-14225-8
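    A toy illustration of the ensembling idea behind CLL-TIM (28 base learners in the paper, three here) using scikit-learn's soft-voting classifier on synthetic data; the spread of base-model probabilities hints at the kind of per-patient uncertainty estimate the abstract describes. Not the authors' model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=5)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=5)),
                ("nb", GaussianNB())],
    voting="soft",  # average the base models' predicted probabilities
)
ensemble.fit(X, y)

# Disagreement among base models serves as a crude per-patient uncertainty.
probs = np.stack([est.predict_proba(X[:5])[:, 1]
                  for est in ensemble.estimators_])
print("mean risk:", probs.mean(axis=0), "spread:", probs.std(axis=0))
```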
  15. Pharmaceut Med. 2019 Jun;33(3): 209-217
       INTRODUCTION: Outcomes in type 2 diabetes mellitus (T2DM) could be optimized by identifying which treatments are likely to produce the greatest improvements in glycemic control for each patient.
    OBJECTIVES: We aimed to identify patient characteristics associated with achieving and maintaining a target glycated hemoglobin (HbA1c) of ≤ 7% using machine learning methodology to analyze clinical trial data on combination therapy for T2DM. By applying a new machine learning methodology to an existing clinical dataset, we evaluated the practical application of this approach and assessed its potential utility for clinical decision making.
    METHODS: Data were pooled from two phase III, randomized, double-blind, parallel-group studies of empagliflozin/linagliptin single-pill combination therapy versus each monotherapy in patients who were treatment-naïve or receiving background metformin. Descriptive analysis was used to assess univariate associations between HbA1c target categories and each baseline characteristic. Following the descriptive analysis, a machine learning analysis was performed (classification tree and random forest methods) to estimate and predict target categories based on patient characteristics at baseline, without a priori selection.
    RESULTS: In the descriptive analysis, lower mean baseline HbA1c and fasting plasma glucose (FPG) were both associated with achieving and maintaining the HbA1c target. The machine learning analysis also identified HbA1c and FPG as the strongest predictors of attaining glycemic control. In contrast, covariates such as body weight, waist circumference, and blood pressure did not contribute to the outcome.
    CONCLUSIONS: Using both traditional and novel data analysis methodologies, this study identified baseline glycemic status as the strongest predictor of attaining target glycemic control. Machine learning algorithms provide a hypothesis-free, unbiased methodology that can greatly enhance the search for predictors of therapeutic success in T2DM. The approach used in the present analysis provides an example of how a machine learning algorithm can be applied to a clinical dataset and used to develop predictions that facilitate clinical decision making.
    DOI:  https://doi.org/10.1007/s40290-019-00281-4
  16. PLoS One. 2020; 15(1): e0227419
      Intracerebral hemorrhage in preterm infants is a major cause of brain damage and cerebral palsy. The pathogenesis of cerebral hemorrhage is multifactorial; risk factors include impaired cerebral autoregulation, infections, and coagulation disorders. Machine learning methods allow the identification of combinations of clinical factors that best differentiate preterm infants with intracerebral bleeding, and the development of models for patients at risk of cerebral hemorrhage. In the current study, a Random Forest approach is applied to develop such models for extremely and very preterm infants (23-30 weeks gestation) based on data collected from a cohort of 229 individuals. The constructed models exhibit good prediction accuracy and might be used in clinical practice to reduce the risk of cerebral bleeding in prematurity.
    DOI:  https://doi.org/10.1371/journal.pone.0227419
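    A short sketch of how a random-forest model can both predict risk and surface the most informative combinations of clinical factors, as in the study above. The factor names echo the abstract; the data are synthetic, and the cohort size (229) is used only for realism.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
factors = ["gestational_age_wk", "autoregulation_index", "infection", "coagulopathy"]
X = pd.DataFrame(rng.normal(size=(229, 4)), columns=factors)
y = (X["gestational_age_wk"] + X["coagulopathy"] < -0.8).astype(int)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=3)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)  # out-of-bag estimate of accuracy
print(pd.Series(rf.feature_importances_, index=factors)
        .sort_values(ascending=False))
```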
  17. Am J Manag Care. 2020 Jan;26(1): 26-31
       OBJECTIVES: To determine if it is possible to risk-stratify avoidable utilization without clinical data and with limited patient-level data.
    STUDY DESIGN: The aim of this study was to demonstrate the influence of socioeconomic determinants of health (SDH) on avoidable patient-level healthcare utilization. The study investigated the ability of machine learning models to predict risk using only publicly available and purchasable SDH data. A total of 138,115 patients were analyzed from a deidentified database representing 3 health systems in the United States.
    METHODS: A hold-out methodology was used to ensure that the model's performance could be tested on a completely independent set of subjects. A proprietary decision tree methodology was used to make the predictions. Only the socioeconomic features (age group, gender, and race) were used in the prediction of a patient's risk of admission.
    RESULTS: The decision tree-based machine learning approach analyzed in this study was able to predict inpatient and emergency department utilization with a high degree of discrimination using only purchasable and publicly available data on SDH.
    CONCLUSIONS: This study indicates that it is possible to stratify patients' risk of utilization without interacting with the patient or collecting information beyond the patient's age, gender, race, and address. The implications of this application are wide and have the potential to positively affect health systems by facilitating targeted patient outreach with specific, individualized interventions to tackle detrimental SDH at not only the individual level but also the neighborhood level.
  18. Am J Manag Care. 2020 Jan;26(1): 40-44
       OBJECTIVES: The Veterans Affairs (VA) Health Care System is among the largest integrated health systems in the United States. Many VA enrollees are dual users of Medicare, and little research has examined methods to most accurately predict which veterans will be mostly reliant on VA services in the future. This study examined whether machine learning methods can better predict future reliance on VA primary care compared with traditional statistical methods.
    STUDY DESIGN: Observational study of 83,143 VA patients dually enrolled in fee-for-service Medicare using VA and Medicare administrative databases and the 2012 Survey of Healthcare Experiences of Patients.
    METHODS: The primary outcome was a dichotomous measure denoting whether patients obtained more than 50% of all primary care visits (VA + Medicare) from VA. We compared the performance of 6 candidate models (logistic regression, elastic net regression, decision trees, random forest, gradient boosting machine, and neural network) in predicting 2013 reliance as a function of 61 patient characteristics observed in 2012. We measured performance using the cross-validated area under the receiver operating characteristic curve (AUROC) metric.
    RESULTS: Overall, 72.9% and 74.5% of veterans were mostly VA reliant in 2012 and 2013, respectively. All models had similar average AUROCs, ranging from 0.873 to 0.892. The best-performing model used gradient boosting machine, which exhibited modestly higher AUROC and similar variance compared with standard logistic regression.
    CONCLUSIONS: The modest gains in performance from the best-performing model, gradient boosting machine, are unlikely to outweigh inherent drawbacks, including computational complexity and limited interpretability compared with traditional logistic regression.
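    A minimal sketch of the six-model bake-off described above, scored by cross-validated AUROC on a synthetic stand-in for the 61 patient characteristics. The "neural network" here is scikit-learn's MLP, and all settings are assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 61))  # stand-in for 61 patient characteristics
y = (X[:, :5].sum(axis=1) + rng.normal(size=3000) > 0).astype(int)

candidates = {
    "logistic": LogisticRegression(max_iter=2000),
    "elastic net": LogisticRegression(penalty="elasticnet", solver="saga",
                                      l1_ratio=0.5, max_iter=5000),
    "decision tree": DecisionTreeClassifier(max_depth=6, random_state=7),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=7),
    "GBM": GradientBoostingClassifier(random_state=7),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=7),
}
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUROC = {auc:.3f}")
```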
  19. NPJ Digit Med. 2020; 3: 3
      Atrial fibrillation (AF) is a cardiac rhythm disorder associated with increased morbidity and mortality. It is the leading risk factor for cardioembolic stroke, and its early detection is crucial in both primary and secondary stroke prevention. Continuous monitoring of cardiac rhythm is now possible thanks to consumer-grade wearable devices, enabling transformative diagnostic and patient management tools. Such monitoring is possible using the low-cost, easy-to-implement optical sensors that today equip the majority of wearables. These sensors record blood volume variations, a technology known as photoplethysmography (PPG), from which the heart rate and other physiological parameters can be extracted to inform about user activity, fitness, sleep, and health. Recently, new wearable devices capable of AF detection have been introduced, in some cases supported by large prospective trials. Such devices would allow for early screening of AF and initiation of therapy to prevent stroke. This review summarizes a body of work on AF detection using PPG. A thorough account of the signal processing, machine learning, and deep learning approaches used in these studies is presented, followed by a discussion of their limitations and the challenges towards clinical application.
    Keywords:  Diagnosis; Risk factors
    DOI:  https://doi.org/10.1038/s41746-019-0207-9
  20. Drug Discov Today. 2020 Jan 08. pii: S1359-6446(20)30005-2. [Epub ahead of print]
      A significant number of drugs fail during the clinical testing stage. To understand the attrition of drugs through the regulatory process, here we review and advance machine learning (ML) and natural language processing algorithms to investigate the importance of factors in clinical trials that are linked with failure in Phases II and III. We find that clinical trial phase transitions can be predicted with an average accuracy of 80%. Identifying these trials provides information to sponsors facing difficult decisions about whether these higher-risk trials should be modified or halted. We also find common protocol characteristics across therapeutic areas that are linked to phase success, including the number of endpoints and the complexity of the eligibility criteria.
    DOI:  https://doi.org/10.1016/j.drudis.2019.12.014
  21. Breast. 2019 Nov 26. pii: S0960-9776(19)31103-8. 49: 194-200. [Epub ahead of print]
      Artificial intelligence has demonstrated its value for automated contouring of organs at risk and target volumes, as well as for auto-planning of radiation dose distributions, in terms of saving time, increasing consistency, and improving dose-volume parameters. Future developments include incorporating dose/outcome data to optimise dose distributions with optimal coverage of the high-risk areas, while at the same time limiting doses to low-risk areas. An infinite gradient of volumes and doses to deliver spatially-adjusted radiation can be generated, making it possible to avoid unnecessary radiation to organs at risk. Therefore, data about patient-, tumour-, and treatment-related factors have to be combined with dose distributions and outcome-containing databases.
    Keywords:  Artificial intelligence; Auto-segmentation; Breast cancer; Deep learning; Neural network; Radiation therapy
    DOI:  https://doi.org/10.1016/j.breast.2019.11.011
  22. Thorac Cancer. 2020 Jan 16.
       BACKGROUND: The aim of the study was to develop a deep learning (DL) algorithm to evaluate the pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) in breast cancer.
    METHODS: A total of 302 breast cancer patients in this retrospective study were randomly divided into a training set (n = 244) and a validation set (n = 58). Tumor regions were manually delineated on each slice by two expert radiologists on enhanced T1-weighted images. Pathological results were used as ground truth. The deep learning network contained five repetitions of convolution and max-pooling layers and ended with three dense layers. The pre-NAC model and post-NAC model took six phases of pre-NAC and post-NAC images as input, respectively. The combined model used 12 channels from six phases of pre-NAC and six phases of post-NAC images. All models also included three indexes of molecular type as one additional input channel.
    RESULTS: The training set contained 137 non-pCR and 107 pCR participants. The validation set contained 33 non-pCR and 25 pCR participants. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.553 for the pre-NAC model, 0.968 for the post-NAC model, and 0.970 for the combined model. A significant difference in AUC was found between using pre-NAC data alone and the combined data (P < 0.001). The positive predictive value of the combined model was greater than that of the post-NAC model (100% vs. 82.8%, P = 0.033).
    CONCLUSION: This study established a deep learning model to predict pCR status after neoadjuvant chemotherapy by combining pre-NAC and post-NAC MRI data. The model performed better than models using pre-NAC or post-NAC data alone.
    KEY POINTS: Significant findings of the study: the model achieved an AUC of 0.968 for pCR prediction, significantly greater than with pre-NAC data alone. What this study adds: a deep learning model that predicts pCR status after neoadjuvant chemotherapy by combining pre-NAC and post-NAC MRI data.
    Keywords:  Breast cancer; DCE-MRI; deep learning; pathologic complete response
    DOI:  https://doi.org/10.1111/1759-7714.13309
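    An illustrative PyTorch reconstruction of the architecture described above (five convolution + max-pooling repetitions followed by three dense layers). Channel counts, filter sizes, and the input resolution are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

class PCRNet(nn.Module):
    def __init__(self, in_channels: int = 13):  # 12 DCE phases + molecular-type channel
        super().__init__()
        blocks, c = [], in_channels
        for out_c in (32, 64, 128, 256, 256):   # five conv/max-pool repetitions
            blocks += [nn.Conv2d(c, out_c, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            c = out_c
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(        # three dense layers
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2),                   # pCR vs non-pCR
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Assumes 128x128 inputs: 128 -> 64 -> 32 -> 16 -> 8 -> 4 after five poolings.
model = PCRNet()
logits = model(torch.randn(2, 13, 128, 128))
```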