bims-arihec Biomed News
on Artificial Intelligence in Healthcare
Issue of 2019‒11‒10
nineteen papers selected by
Céline Bélanger
Cogniges Inc.


  1. J Med Internet Res. 2019 Nov 08. 21(11): e16607
      Data-driven science and its corollaries in machine learning and the wider field of artificial intelligence have the potential to drive important changes in medicine. However, medicine is not a science like any other: It is deeply and tightly bound with a large and wide network of legal, ethical, regulatory, economic, and societal dependencies. As a consequence, scientific and technological progress in handling information and its further processing and cross-linking for decision support and predictive systems must be accompanied by parallel changes in the global environment, with numerous stakeholders, including citizens and society. What can be seen at first glance as a barrier and a mechanism slowing down the progression of data science must, however, be considered an important asset. Only global adoption can transform the potential of big data and artificial intelligence into effective breakthroughs in handling health and medicine. This requires science and society, scientists and citizens, to progress together.
    Keywords:  artificial intelligence; big data; medical informatics
    DOI:  https://doi.org/10.2196/16607
  2. J Am Med Inform Assoc. 2019 Nov 04. pii: ocz192. [Epub ahead of print]
      As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery becomes increasingly evident, it is likely that AI will be incorporated into routine clinical care in the near future. This promise has led to growing focus and investment in AI medical applications from both governmental organizations and technology companies. However, concern has been expressed about the ethical and regulatory aspects of applying AI in health care. These concerns include the possibility of biases, lack of transparency with certain AI algorithms, privacy concerns with the data used for training AI models, and safety and liability issues with AI applications in clinical environments. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue about, and few recommendations for, how to practically address these concerns. In this article, we propose a governance model that aims not only to address the ethical and regulatory issues that arise from the application of AI in health care, but also to stimulate further discussion about governance of AI in health care.
    Keywords:  artificial intelligence; ethics; governance framework; healthcare; regulation
    DOI:  https://doi.org/10.1093/jamia/ocz192
  3. Curr Psychiatry Rep. 2019 Nov 07. 21(11): 116
      PURPOSE OF REVIEW: Artificial intelligence (AI) technology holds both great promise to transform mental healthcare and potential pitfalls. This article provides an overview of AI and current applications in healthcare, a review of recent original research on AI specific to mental health, and a discussion of how AI can supplement clinical practice while considering its current limitations, areas needing additional research, and ethical implications regarding AI technology.
    RECENT FINDINGS: We reviewed 28 studies of AI and mental health that used electronic health records (EHRs), mood rating scales, brain imaging data, novel monitoring systems (e.g., smartphone, video), and social media platforms to predict, classify, or subgroup mental illnesses, including depression, schizophrenia and other psychiatric illnesses, and suicidal ideation and attempts. Collectively, these studies revealed high accuracies and provided excellent examples of AI's potential in mental healthcare, but most should be considered early proof-of-concept works demonstrating the potential of using machine learning (ML) algorithms to address mental health questions and exploring which types of algorithms yield the best performance. As AI techniques continue to be refined and improved, it will be possible to help mental health practitioners re-define mental illnesses more objectively than currently done in the DSM-5, identify these illnesses at an earlier or prodromal stage when interventions may be more effective, and personalize treatments based on an individual's unique characteristics. However, caution is necessary in order to avoid over-interpreting preliminary results, and more work is required to bridge the gap between AI in mental health research and clinical care.
    Keywords:  Bioethics; Deep learning; Depression; Machine learning; Natural language processing; Research ethics; Schizophrenia; Suicide; Technology
    DOI:  https://doi.org/10.1007/s11920-019-1094-0
  4. Front Psychiatry. 2019 ;10 746
      Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI is not safe, it should not be used, and if it is not trusted, it will not be used. In order to assess safety, trust, interfaces, procedures, and system-level workflows, oversight and collaboration are needed among AI systems, patients, clinicians, and administrators.
    Keywords:  artificial intelligence; chatbot; conversational AI; digital assistant; expert systems; human–computer interaction; natural language processing; psychotherapy
    DOI:  https://doi.org/10.3389/fpsyt.2019.00746
  5. Breast. 2019 Oct 11. pii: S0960-9776(19)30564-8. [Epub ahead of print]49 25-32
      Breast cancer care is a leading area for development of artificial intelligence (AI), with applications including screening and diagnosis, risk calculation, prognostication and clinical decision-support, management planning, and precision medicine. We review the ethical, legal and social implications of these developments. We consider the values encoded in algorithms, the need to evaluate outcomes, and issues of bias and transferability, data ownership, confidentiality and consent, and legal, moral and professional responsibility. We consider potential effects for patients, including on trust in healthcare, and provide some social science explanations for the apparent rush to implement AI solutions. We conclude by anticipating future directions for AI in breast cancer care. Stakeholders in healthcare AI should acknowledge that their enterprise is an ethical, legal and social challenge, not just a technical challenge. Taking these challenges seriously will require broad engagement, imposition of conditions on implementation, and pre-emptive systems of oversight to ensure that development does not run ahead of evaluation and deliberation. Once artificial intelligence becomes institutionalised, it may be difficult to reverse: a proactive role for government, regulators and professional groups will help ensure introduction in robust research contexts, and the development of a sound evidence base regarding real-world effectiveness. Detailed public discussion is required to consider what kind of AI is acceptable rather than simply accepting what is offered, thus optimising outcomes for health systems, professionals, society and those receiving care.
    Keywords:  AI (Artificial Intelligence); Breast carcinoma; Ethical Issues; Social values; Technology Assessment, Biomedical
    DOI:  https://doi.org/10.1016/j.breast.2019.10.001
  6. Eur Radiol. 2019 Nov 05.
      OBJECTIVE: To evaluate the potential value of machine learning (ML)-based MRI texture analysis for predicting the 1p/19q codeletion status of lower-grade gliomas (LGG), using various state-of-the-art ML algorithms.
    MATERIALS AND METHODS: For this retrospective study, 107 patients with LGG were included from a public database. Texture features were extracted from conventional T2-weighted and contrast-enhanced T1-weighted MR images using LIFEx software. Training and unseen validation splits were created using a stratified 10-fold cross-validation technique along with minority over-sampling. Dimension reduction was done using collinearity analysis and feature selection (ReliefF). Classifications were done using adaptive boosting, k-nearest neighbours, naive Bayes, neural network, random forest, stochastic gradient descent, and support vector machine classifiers. The Friedman test and pairwise post hoc analyses were used to compare classification performances based on the area under the curve (AUC).
    RESULTS: Overall, the predictive performance of the ML algorithms was statistically significantly different, χ2(6) = 26.7, p < 0.001. There was no statistically significant difference among the performance of the neural network, naive Bayes, support vector machine, random forest, and stochastic gradient descent classifiers, adjusted p > 0.05. The mean AUC and accuracy values of these five algorithms ranged from 0.769 to 0.869 and from 80.1% to 84.0%, respectively. The neural network had the highest mean rank, with mean AUC and accuracy values of 0.869 and 83.8%, respectively.
    CONCLUSIONS: The ML-based MRI texture analysis might be a promising non-invasive technique for predicting the 1p/19q codeletion status of LGGs. Using this technique along with various ML algorithms, more than four-fifths of the LGGs can be correctly classified.
    KEY POINTS: • More than four-fifths of the lower-grade gliomas can be correctly classified with machine learning-based MRI texture analysis. Satisfying classification outcomes are not limited to a single algorithm. • A few-slice-based volumetric segmentation technique would be a valid approach, providing satisfactory predictive textural information and avoiding excessive segmentation duration in clinical practice. • Feature selection is sensitive to different patient data set samples so that each sampling leads to the selection of different feature subsets, which needs to be considered in future works.
    Keywords:  Artificial intelligence; Glioma; Machine learning; Mutation; Radiomics
    DOI:  https://doi.org/10.1007/s00330-019-06492-2
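The abstract's pipeline (stratified 10-fold cross-validation, minority over-sampling restricted to the training fold, feature selection, and AUC comparison across classifiers) can be sketched as follows. This is not the authors' code: the data are synthetic, naive random over-sampling stands in for their over-sampling step, and scikit-learn's univariate `SelectKBest` stands in for ReliefF, which scikit-learn does not provide.

```python
# Hedged sketch of a cross-validated texture-classification pipeline.
# Synthetic data; SelectKBest is a stand-in for ReliefF feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# 107 "patients" with imbalanced classes, mimicking the cohort size only
X, y = make_classification(n_samples=107, n_features=40, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)

def oversample(Xtr, ytr):
    """Naive random over-sampling of the minority class (training fold only)."""
    minority = np.flatnonzero(ytr == 1)
    extra = rng.choice(minority, size=(ytr == 0).sum() - minority.size)
    idx = np.concatenate([np.arange(ytr.size), extra])
    return Xtr[idx], ytr[idx]

models = {"random forest": RandomForestClassifier(random_state=0),
          "naive Bayes": GaussianNB(),
          "SVM": SVC(probability=True, random_state=0)}
aucs = {name: [] for name in models}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    Xtr, ytr = oversample(X[train], y[train])
    sel = SelectKBest(f_classif, k=10).fit(Xtr, ytr)   # dimension reduction
    for name, model in models.items():
        model.fit(sel.transform(Xtr), ytr)
        prob = model.predict_proba(sel.transform(X[test]))[:, 1]
        aucs[name].append(roc_auc_score(y[test], prob))

for name, scores in aucs.items():
    print(f"{name}: mean AUC = {np.mean(scores):.3f}")
```

Fitting the selector inside each fold, as above, is what keeps the validation split "unseen"; selecting features on the full data first would leak information into the AUC estimates.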
  7. World J Urol. 2019 Nov 05.
      PURPOSE: The purpose of the study was to provide a comprehensive review of recent machine learning (ML) and deep learning (DL) applications in urological practice. Numerous studies have reported their use in the medical care of various urological disorders; however, no critical analysis has been made to date.
    METHODS: A detailed search of original articles was performed using the PubMed MEDLINE database to identify recent English literature relevant to ML and DL applications in the fields of urolithiasis, renal cell carcinoma (RCC), bladder cancer (BCa), and prostate cancer (PCa).
    RESULTS: In total, 43 articles were included addressing these four subfields. The most common ML and DL application in urolithiasis is in the prediction of endourologic surgical outcomes. The main area of research involving ML and DL in RCC concerns the differentiation between benign and malignant small renal masses, Fuhrman nuclear grade prediction, and gene expression-based molecular signatures. BCa studies employ radiomics and texture feature analysis for the distinction between low- and high-grade tumors, address accurate image-based cytology, and use algorithms to predict treatment response, tumor recurrence, and patient survival. PCa studies aim at developing algorithms for Gleason score prediction, MRI computer-aided diagnosis, and surgical outcomes and biochemical recurrence prediction. Studies consistently found the superiority of these methods over traditional statistical methods.
    CONCLUSIONS: The continuous incorporation of clinical data, further ML and DL algorithm retraining, and generalizability of models will augment the prediction accuracy and enhance individualized medicine.
    Keywords:  Artificial intelligence; Artificial neural network; Bladder cancer; Convolutional neural network; Deep learning; Machine learning; Prostate cancer; Renal cell carcinoma; Urolithiasis
    DOI:  https://doi.org/10.1007/s00345-019-03000-5
  8. JAMA Netw Open. 2019 Nov 01. 2(11): e1914645
      Importance: Deep learning-based methods for analyzing histological patterns in high-resolution microscopy images, such as the sliding window approach for cropped-image classification and heuristic aggregation for whole-slide inference, have shown promising results. These approaches, however, require a laborious annotation process and are fragmented.
    Objective: To evaluate a novel deep learning method that uses tissue-level annotations for high-resolution histological image analysis for Barrett esophagus (BE) and esophageal adenocarcinoma detection.
    Design, Setting, and Participants: This diagnostic study collected deidentified high-resolution histological images (N = 379) for training a new model composed of a convolutional neural network and a grid-based attention network. Histological images of patients who underwent endoscopic esophagus and gastroesophageal junction mucosal biopsy between January 1, 2016, and December 31, 2018, at Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire) were collected.
    Main Outcomes and Measures: The model was evaluated on an independent testing set of 123 histological images with 4 classes: normal, BE-no-dysplasia, BE-with-dysplasia, and adenocarcinoma. Performance of this model was measured and compared with that of the current state-of-the-art sliding window approach using the following standard machine learning metrics: accuracy, recall, precision, and F1 score.
    Results: Of the independent testing set of 123 histological images, 30 (24.4%) were in the BE-no-dysplasia class, 14 (11.4%) in the BE-with-dysplasia class, 21 (17.1%) in the adenocarcinoma class, and 58 (47.2%) in the normal class. Classification accuracies of the proposed model were 0.85 (95% CI, 0.81-0.90) for the BE-no-dysplasia class, 0.89 (95% CI, 0.84-0.92) for the BE-with-dysplasia class, and 0.88 (95% CI, 0.84-0.92) for the adenocarcinoma class. The proposed model achieved a mean accuracy of 0.83 (95% CI, 0.80-0.86) and marginally outperformed the sliding window approach on the same testing set. The F1 scores of the attention-based model were at least 8% higher for each class compared with the sliding window approach: 0.68 (95% CI, 0.61-0.75) vs 0.61 (95% CI, 0.53-0.68) for the normal class, 0.72 (95% CI, 0.63-0.80) vs 0.58 (95% CI, 0.45-0.69) for the BE-no-dysplasia class, 0.30 (95% CI, 0.11-0.48) vs 0.22 (95% CI, 0.11-0.33) for the BE-with-dysplasia class, and 0.67 (95% CI, 0.54-0.77) vs 0.58 (95% CI, 0.44-0.70) for the adenocarcinoma class. However, this outperformance was not statistically significant.
    Conclusions and Relevance: Results of this study suggest that the proposed attention-based deep neural network framework for BE and esophageal adenocarcinoma detection is important because it is based solely on tissue-level annotations, unlike existing methods that are based on regions of interest. This new model is expected to open avenues for applying deep learning to digital pathology.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2019.14645
  9. Graefes Arch Clin Exp Ophthalmol. 2019 Nov 04.
      PURPOSE: To investigate the feasibility of training an artificial intelligence (AI) on a publicly available AI platform to diagnose polypoidal choroidal vasculopathy (PCV) using indocyanine green angiography (ICGA).
    METHODS: Two AI models were trained on a publicly available AI platform with a data set of 430 ICGA images of normal, neovascular age-related macular degeneration (nvAMD), and PCV eyes. The one-step method distinguished normal, nvAMD, and PCV images simultaneously. The two-step method identified normal and abnormal ICGA images in the first step and diagnosed PCV from the abnormal ICGA images in the second step. The method with the higher performance was compared with retinal specialists and ophthalmology residents on performance in diagnosing PCV.
    RESULTS: The two-step method had better performance: precision and recall were both 0.911 at the first step and both 0.783 at the second step. For the test data set, the two-step method distinguished normal from abnormal images with an accuracy of 1.00 and diagnosed PCV with an accuracy of 0.83, which was comparable to retinal specialists and superior to ophthalmology residents.
    CONCLUSION: In this evaluation of ICGA images from normal, nvAMD, and PCV eyes, the models trained on a publicly available AI platform performed comparably to retinal specialists in diagnosing PCV. Publicly available AI platforms might help ophthalmologists who have no AI-related resources, especially those in less developed areas, to conduct future studies.
    Keywords:  Artificial intelligence; Deep learning; Diagnosis; Indocyanine green angiography; Machine learning; Polypoidal choroidal vasculopathy
    DOI:  https://doi.org/10.1007/s00417-019-04493-x
  10. Front Pediatr. 2019 ;7 413
      Background: Early detection of pediatric severe sepsis is necessary in order to optimize effective treatment, and new methods are needed to facilitate this early detection.
    Objective: Can a machine learning-based prediction algorithm using electronic health record (EHR) data predict severe sepsis onset in pediatric populations?
    Methods: EHR data were collected from a retrospective set of de-identified pediatric inpatient and emergency encounters for patients between 2 and 17 years of age, drawn from the University of California San Francisco (UCSF) Medical Center, with encounter dates between June 2011 and March 2016.
    Results: Pediatric patients (n = 9,486) were identified, and 101 (1.06%) were labeled with severe sepsis following the pediatric severe sepsis definition of Goldstein et al. (1). In 4-fold cross-validation evaluations, the machine learning algorithm achieved an AUROC of 0.916 for discrimination between severe sepsis and control pediatric patients at the time of onset, and an AUROC of 0.718 at 4 h before onset. The prediction algorithm significantly outperformed the Pediatric Logistic Organ Dysfunction score (PELOD-2) (p < 0.05) and pediatric Systemic Inflammatory Response Syndrome (SIRS) criteria (p < 0.05) in the prediction of severe sepsis 4 h before onset, using cross-validation and pairwise t-tests.
    Conclusion: This machine learning algorithm has the potential to deliver high-performance severe sepsis detection and prediction through automated monitoring of EHR data for pediatric inpatients, which may enable earlier sepsis recognition and treatment initiation.
    Keywords:  early detection; electronic health records; machine learning; pediatric severe sepsis; prediction
    DOI:  https://doi.org/10.3389/fped.2019.00413
  11. Clin Rheumatol. 2019 Nov 06.
      Clinical evaluation of rheumatic and musculoskeletal diseases through images is a challenge for the beginning rheumatologist, since image diagnosis is an expert task with a long learning curve. The aim of this work was to present a narrative review of the main ultrasound computer-aided diagnosis systems that may help clinicians, thanks to progress in the application of artificial intelligence techniques. We performed a literature review searching for original articles in seven repositories, from 1970 to 2019, and identified 11 main methods currently used in ultrasound computer-aided diagnosis systems. We also found that rheumatoid arthritis, osteoarthritis, systemic lupus erythematosus, and idiopathic inflammatory myopathies are the four musculoskeletal and rheumatic diseases most often studied with these systems, with overall accuracies of > 75%.
    Keywords:  Artificial intelligence; Computer-assisted diagnosis; Expert systems; Machine learning; Rheumatology
    DOI:  https://doi.org/10.1007/s10067-019-04791-z
  12. BMC Med Inform Decis Mak. 2019 Nov 06. 19(1): 211
      BACKGROUND: Diabetes and cardiovascular disease are two of the main causes of death in the United States. Identifying and predicting these diseases in patients is the first step towards stopping their progression. We evaluate the capabilities of machine learning models in detecting at-risk patients using survey data (and laboratory results), and identify key variables within the data contributing to these diseases among the patients.
    METHODS: Our research explores data-driven approaches which utilize supervised machine learning models to identify patients with such diseases. Using the National Health and Nutrition Examination Survey (NHANES) dataset, we conduct an exhaustive search of all available feature variables within the data to develop models for cardiovascular disease, prediabetes, and diabetes detection. Using different time-frames and feature sets for the data (based on laboratory data), multiple machine learning models (logistic regression, support vector machines, random forest, and gradient boosting) were evaluated on their classification performance. The models were then combined to develop a weighted ensemble model, capable of leveraging the performance of the disparate models to improve detection accuracy. The information gain of tree-based models was used to identify the key variables within the patient data that contributed to the detection of at-risk patients in each disease class.
    RESULTS: The developed ensemble model for cardiovascular disease (based on 131 variables) achieved an area under the receiver operating characteristic curve (AU-ROC) of 83.1% without laboratory results and 83.9% with laboratory results. In diabetes classification (based on 123 variables), the eXtreme Gradient Boosting (XGBoost) model achieved an AU-ROC of 86.2% (without laboratory data) and 95.7% (with laboratory data). For pre-diabetic patients, the ensemble model had the top AU-ROC of 73.7% (without laboratory data), and with laboratory-based data XGBoost performed best at 84.4%. The top five predictors for diabetes patients were 1) waist size, 2) age, 3) self-reported weight, 4) leg length, and 5) sodium intake. For cardiovascular disease the models identified 1) age, 2) systolic blood pressure, 3) self-reported weight, 4) occurrence of chest pain, and 5) diastolic blood pressure as key contributors.
    CONCLUSION: We conclude that machine-learned models based on survey questionnaires can provide an automated identification mechanism for patients at risk of diabetes and cardiovascular disease. We also identify key contributors to the prediction, which can be further explored for their implications on electronic health records.
    Keywords:  Ensemble learning; Feature learning; Health analytics; Machine learning
    DOI:  https://doi.org/10.1186/s12911-019-0918-5
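The weighted-ensemble idea in this abstract can be sketched as soft voting in which each base model's vote is weighted by its AU-ROC, with tree-based feature importances standing in for the information-gain analysis. This is a simplified illustration on synthetic data, not the authors' NHANES pipeline; in particular, a real implementation would compute the weights on a separate validation split rather than on the test set, as flagged in the comments.

```python
# Hedged sketch of a weighted ensemble of disparate classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=1)

base = [LogisticRegression(max_iter=1000),
        RandomForestClassifier(random_state=1),
        GradientBoostingClassifier(random_state=1)]
probs, weights = [], []
for model in base:
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    probs.append(p)
    # Weight each model by its individual AU-ROC. NOTE: scored on the test
    # split here only for brevity; use a held-out validation split in practice.
    weights.append(roc_auc_score(y_te, p))

weights = np.array(weights) / np.sum(weights)
ensemble_p = np.average(probs, axis=0, weights=weights)
print(f"ensemble AU-ROC: {roc_auc_score(y_te, ensemble_p):.3f}")

# Key-variable analysis: impurity-based importances from the tree model
# (a stand-in for the information-gain ranking described in the abstract).
top5 = np.argsort(base[1].feature_importances_)[::-1][:5]
print("top 5 feature indices:", top5)
```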
  13. BMC Med Inform Decis Mak. 2019 Nov 06. 19(1): 210
      BACKGROUND: For an effective artificial pancreas (AP) system and improved therapeutic intervention with continuous glucose monitoring (CGM), accurately predicting the occurrence of hypoglycemia is very important. While there have been many studies reporting successful algorithms for predicting nocturnal hypoglycemia, predicting postprandial hypoglycemia remains a challenge due to the extreme glucose fluctuations that occur around mealtimes. The goal of this study is to evaluate the feasibility of easy-to-use, computationally efficient machine-learning algorithms for predicting postprandial hypoglycemia with a unique feature set.
    METHODS: We use retrospective CGM datasets of 104 people who had experienced at least one hypoglycemia alert value during a three-day CGM session. The algorithms were developed based on four machine learning models with a unique data-driven feature set: a random forest (RF), a support vector machine using a linear function or a radial basis function, a k-nearest neighbor, and a logistic regression. With 5-fold cross-subject validation, the average performance of each model was calculated to compare and contrast their individual performance. The area under the receiver operating characteristic curve (AUC) and the F1 score were used as the main criteria for evaluating performance.
    RESULTS: In predicting a hypoglycemia alert value with a 30-min prediction horizon, the RF model showed the best performance, with an average AUC of 0.966, an average sensitivity of 89.6%, an average specificity of 91.3%, and an average F1 score of 0.543. In addition, the RF showed better predictive performance for postprandial hypoglycemic events than the other models.
    CONCLUSION: We showed that machine-learning algorithms have potential in predicting postprandial hypoglycemia, and the RF model could be a good candidate for the further development of a postprandial hypoglycemia prediction algorithm to advance CGM and AP technology.
    Keywords:  Diabetes; Hypoglycemia; Machine-learning approach; Risk prediction
    DOI:  https://doi.org/10.1186/s12911-019-0943-4
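The general framing used in studies like this one, turning a CGM trace into supervised examples with a prediction horizon, can be sketched as below. Everything here is illustrative: the CGM trace is synthetic, the 70 mg/dL alert value and 30-min windows follow common CGM conventions rather than the paper's exact feature set, and plain 5-fold cross-validation stands in for the paper's cross-subject validation.

```python
# Hedged sketch: framing hypoglycemia prediction as supervised learning
# over windowed CGM features, with a 30-min prediction horizon.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
t = np.arange(2000)                                   # 5-min samples
glucose = 110 + 40 * np.sin(t / 40) + rng.normal(0, 8, t.size)  # mg/dL

H, P = 6, 6        # 30-min history window, 30-min prediction horizon
X, y = [], []
for i in range(H, t.size - P):
    window = glucose[i - H:i]
    X.append([window.mean(), window.std(), window[-1],
              window[-1] - window[0]])   # level, variability, recent trend
    y.append(int(glucose[i + P] <= 70))  # alert value crossed 30 min ahead?
X, y = np.array(X), np.array(y)

auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"5-fold AUC: {auc:.3f}")
```

In the study itself the folds were split by subject, so no person contributes to both training and testing; that detail matters whenever windows from the same trace are highly correlated.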
  14. JMIR Med Inform. 2019 Nov 08. 7(4): e14340
      BACKGROUND: Hypoglycemic events are common and potentially dangerous conditions among patients being treated for diabetes. Automatic detection of such events could improve patient care and is valuable in population studies. Electronic health records (EHRs) are valuable resources for the detection of such events.
    OBJECTIVE: In this study, we aim to develop a deep-learning-based natural language processing (NLP) system to automatically detect hypoglycemic events from EHR notes. Our model is called the High-Performing System for Automatically Detecting Hypoglycemic Events (HYPE).
    METHODS: Domain experts reviewed 500 EHR notes of diabetes patients to determine whether each sentence contained a hypoglycemic event or not. We used this annotated corpus to train and evaluate HYPE, the high-performance NLP system for hypoglycemia detection. We built and evaluated both a classical machine learning model (ie, support vector machines [SVMs]) and state-of-the-art neural network models.
    RESULTS: We found that neural network models outperformed the SVM model. The convolutional neural network (CNN) model yielded the highest performance in a 10-fold cross-validation setting: mean precision=0.96 (SD 0.03), mean recall=0.86 (SD 0.03), and mean F1=0.91 (SD 0.03).
    CONCLUSIONS: Despite the challenges posed by small and highly imbalanced data, our CNN-based HYPE system still achieved a high performance for hypoglycemia detection. HYPE can be used for EHR-based hypoglycemia surveillance and population studies in diabetes patients.
    Keywords:  adverse events; convolutional neural networks; hypoglycemia; natural language processing
    DOI:  https://doi.org/10.2196/14340
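The SVM baseline that HYPE was compared against can be sketched as a sentence-level bag-of-words classifier. The sentences and labels below are invented stand-ins for the expert-annotated EHR corpus, and this is the classical baseline only, not the CNN that won.

```python
# Hedged sketch of a sentence-level SVM baseline for event detection.
# Invented sentences; a real EHR corpus with expert labels would replace them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sentences = ["blood glucose dropped to 52 and patient felt shaky",
             "patient reported sweating and confusion, glucose 48",
             "glucose 61 with dizziness after insulin dose",
             "routine follow up, glucose well controlled at 110",
             "no acute complaints, A1c stable",
             "continue metformin, glucose 124 this morning"]
labels = [1, 1, 1, 0, 0, 0]   # 1 = sentence describes a hypoglycemic event

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["glucose fell to 55, patient diaphoretic"]))
```

With data this small and imbalanced, per-class precision/recall and F1 (the metrics the paper reports) are far more informative than accuracy.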
  15. Pancreas. 2019 Nov/Dec. 48(10): 1250-1258
      A workshop on research gaps and opportunities for Precision Medicine in Pancreatic Disease was sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases on July 24, 2019, in Pittsburgh. The workshop included an overview lecture on precision medicine in cancer and 4 sessions: (1) general considerations for the application of bioinformatics and artificial intelligence; (2) omics, the combination of risk factors and biomarkers; (3) precision imaging; and (4) gaps, barriers, and needs to move from precision to personalized medicine for pancreatic disease. Current precision medicine approaches and tools were reviewed, and participants identified knowledge gaps and research needs that hinder bringing precision medicine to pancreatic diseases. Most critical were (a) multicenter efforts to collect large-scale patient data sets from multiple data streams in the context of environmental and social factors; (b) new information systems that can collect, annotate, and quantify data to inform disease mechanisms; (c) novel prospective clinical trial designs to test and improve therapies; and (d) a framework for measuring and assessing the value of proposed approaches to the health care system. With these advances, precision medicine can identify patients early in the course of their pancreatic disease and prevent progression to chronic or fatal illness.
    DOI:  https://doi.org/10.1097/MPA.0000000000001412
  16. JMIR Mhealth Uhealth. 2019 Nov 01. 7(11): e14452
      BACKGROUND: Type 2 diabetes mellitus (T2DM) is a major public health burden. Self-management of diabetes, including maintaining a healthy lifestyle, is essential for glycemic control and to prevent diabetes complications. Mobile-based health data can play an important role in the forecasting of blood glucose levels for lifestyle management and control of T2DM.
    OBJECTIVE: The objective of this work was to dynamically forecast daily glucose levels in patients with T2DM based on their daily mobile health lifestyle data, including diet, physical activity, weight, and glucose level from the day before.
    METHODS: We used data from 10 overweight or obese patients with T2DM in a behavioral lifestyle intervention using mobile tools for daily monitoring of diet, physical activity, weight, and blood glucose over 6 months. We developed a deep learning model based on long short-term memory (LSTM) recurrent neural networks to forecast next-day glucose levels in individual patients. The neural network used several layers of computational nodes to model, from noisy data, how mobile health data (food intake including consumed calories, fat, and carbohydrates; exercise; and weight) progressed from one day to the next.
    RESULTS: The model was validated on a data set of 10 patients who had been monitored daily for over 6 months. The proposed deep learning model demonstrated considerable accuracy in predicting next-day glucose levels based on the Clarke Error Grid and a ±10% range of the actual values.
    CONCLUSIONS: Using machine learning methodologies may leverage mobile health lifestyle data to develop effective individualized prediction plans for T2DM management. However, predicting future glucose levels is challenging as glucose level is determined by multiple factors. Future study with more rigorous study design is warranted to better predict future glucose levels for T2DM management.
    Keywords:  glucose level prediction; long short-term memory (LSTM)-based recurrent neural networks (RNNs); mobile health lifestyle data; type 2 diabetes
    DOI:  https://doi.org/10.2196/14452
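The forecasting setup, each day's lifestyle features plus that day's glucose predicting the next day's glucose, can be sketched without a deep learning framework. A linear regressor stands in for the paper's LSTM here, all values are synthetic, and the ±10% evaluation band mirrors the abstract's reporting rather than the Clarke Error Grid itself.

```python
# Hedged sketch: next-day glucose forecasting from daily lifestyle data.
# Linear regression is a stand-in for the LSTM; all data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
days = 180
calories = rng.normal(2000, 300, days)
carbs = rng.normal(250, 50, days)
exercise_min = rng.normal(30, 15, days).clip(0)
weight = 90 - 0.02 * np.arange(days) + rng.normal(0, 0.3, days)
glucose = (90 + 0.02 * carbs - 0.1 * exercise_min
           + rng.normal(0, 5, days))          # mg/dL, toy generative model

# Today's features (including today's glucose) predict tomorrow's glucose.
X = np.column_stack([calories, carbs, exercise_min, weight, glucose])[:-1]
y = glucose[1:]

model = LinearRegression().fit(X[:150], y[:150])   # train on first 150 days
pred = model.predict(X[150:])
within_10pct = np.mean(np.abs(pred - y[150:]) / y[150:] <= 0.10)
print(f"fraction within ±10% of actual: {within_10pct:.2f}")
```

An LSTM replaces the single-day feature vector with a sequence of past days, which is what lets it model day-to-day dynamics the linear stand-in cannot.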
  17. Can J Cardiol. 2019 Nov. 35(11): 1523-1533
      BACKGROUND: The diagnostic performance of coronary computed tomography angiography-derived fractional flow reserve (CT-FFR) in detecting ischemia in myocardial bridging (MB) has not been investigated to date.
    METHODS: This retrospective multicentre study included 104 patients with left anterior descending MBs. MB was classified as either superficial or deep and short or long, and all MB vessels were further divided into <50%, 50% to 69%, and ≥70% groups according to proximal lumen stenosis on invasive coronary angiography. The diagnostic performance and receiver operating characteristics (ROC) of CT-FFR in detecting lesion-specific ischemia were assessed on a per-vessel level, using invasive FFR as the reference standard. The intraclass correlation coefficient (ICC) and Bland-Altman plots were used to measure agreement.
    RESULTS: Forty-eight MB vessels (46.2%) showed ischemia by invasive FFR (≤0.80). Sensitivity, specificity, and accuracy of CT-FFR to detect functional ischemia were 0.96 (0.85 to 0.99), 0.84 (0.71 to 0.92), and 0.89 (0.81 to 0.94), respectively, in all MB vessels. There were no differences in diagnostic performance between superficial and deep MB or between short and long MB (all P > 0.05). The accuracy of CT-FFR was 0.96 (0.85 to 0.99) in ≥70% stenosis, 0.82 (0.67 to 0.91) in 50% to 69% stenosis, and 0.89 (0.51 to 0.99) in <50% stenosis (P = 0.081). Bland-Altman analysis showed a slight mean difference between CT-FFR and invasive FFR of 0.014 (95% limit of agreement, -0.117 to 0.145). The ICC was 0.775 (95% confidence interval, 0.685-0.842, P < 0.001).
    CONCLUSIONS: CT-FFR demonstrated high diagnostic performance for identifying functional ischemia in vessels with MB and concomitant proximal atherosclerotic disease when compared with invasive FFR. However, the clinical use of CT-FFR in patients with MB requires further study to yield stronger and more robust results.
    DOI:  https://doi.org/10.1016/j.cjca.2019.08.026
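    Bland-Altman agreement, as used in the study above, compares paired measurements from two methods via the mean difference (bias) and the 95% limits of agreement (mean ± 1.96 × SD of the differences). A minimal sketch with made-up paired values, not the study's data:

    ```python
    import math

    def bland_altman(a, b):
        # Bias and 95% limits of agreement between two paired measurement methods.
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        mean = sum(diffs) / n
        sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
        return mean, mean - 1.96 * sd, mean + 1.96 * sd

    # Hypothetical paired measurements (e.g. CT-FFR vs invasive FFR).
    ct_ffr  = [0.82, 0.76, 0.91, 0.68, 0.80]
    inv_ffr = [0.80, 0.78, 0.89, 0.70, 0.79]
    bias, lo, hi = bland_altman(ct_ffr, inv_ffr)
    ```

    A bias near zero with narrow limits of agreement, as reported above (0.014; -0.117 to 0.145), indicates close agreement between the two methods.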
  18. J Dent. 2019 Nov 05. pii: S0300-5712(19)30228-3. [Epub ahead of print] 103226
      OBJECTIVES: Convolutional neural networks (CNNs) are increasingly applied for medical image diagnostics. We performed a scoping review exploring (1) the use cases, (2) the methodologies, and (3) the findings of studies applying CNNs to dental image material. SOURCES: Medline via PubMed, IEEE Xplore, and arXiv were searched.
    STUDY SELECTION: Full-text articles and conference-proceedings reporting CNN application on dental imagery were included.
    DATA: Thirty-six studies, published 2015-2019, were included, mainly from four countries (South Korea, United States, Japan, China). Studies focused on general dentistry (n = 15 studies), cariology (n = 5), endodontics (n = 2), periodontology (n = 3), orthodontics (n = 3), dental radiology (n = 2), forensic dentistry (n = 2), and general medicine (n = 4). Most often, the detection, segmentation, or classification of anatomical structures, including teeth (n = 9), jaw bone (n = 2), and skeletal landmarks (n = 4), was performed. Detection of pathologies focused on caries (n = 3). The most commonly used image type was panoramic radiographs (n = 11), followed by periapical radiographs (n = 8) and cone-beam CT or conventional CT (n = 6). Dataset sizes varied from 10 to 5,166 images (mean 1,053). Most studies used medical professionals to label the images and constitute the reference test. A large range of outcome metrics was employed, hampering comparisons across studies. Seven studies compared CNN performance against an independent test group of dentists; most found the CNN to perform similarly to dentists. Applicability and impact on treatment decisions were not assessed at all.
    CONCLUSIONS: CNNs are increasingly employed for dental image diagnostics in research settings. Their usefulness, safety and generalizability should be demonstrated using more rigorous, replicable and comparable methodology.
    CLINICAL SIGNIFICANCE: CNNs may be used in diagnostic-assistance systems, thereby assisting dentists in a more comprehensive, systematic and faster evaluation and documentation of dental images. CNNs may become applicable in routine care; however, prior to that, the dental community should appraise them against the rules of evidence-based practice.
    Keywords:  Artificial Intelligence; CNNs; Dentistry; Diagnostics; Evidence-based Dentistry; Images
    DOI:  https://doi.org/10.1016/j.jdent.2019.103226
  19. JAMA Netw Open. 2019 Nov 01. 2(11): e1914672
      Importance: Automatic curation of consumer-generated, opioid-related social media big data may enable real-time monitoring of the opioid epidemic in the United States. Objective: To develop and validate an automatic text-processing pipeline for geospatial and temporal analysis of opioid-mentioning social media chatter.
    Design, Setting, and Participants: This cross-sectional, population-based study was conducted from December 1, 2017, to August 31, 2019, and used more than 3 years of publicly available social media posts on Twitter, dated from January 1, 2012, to October 31, 2015, that were geolocated in Pennsylvania. Opioid-mentioning tweets were extracted using prescription and illicit opioid names, including street names and misspellings. Social media posts (tweets) (n = 9006) were manually categorized into 4 classes, and training and evaluation of several machine learning algorithms were performed. Temporal and geospatial patterns were analyzed with the best-performing classifier on unlabeled data.
    Main Outcomes and Measures: Pearson and Spearman correlations of county- and substate-level abuse-indicating tweet rates with opioid overdose death rates from the Centers for Disease Control and Prevention WONDER database and with 4 metrics from the National Survey on Drug Use and Health (NSDUH) for 3 years were calculated. Classifier performances were measured through microaveraged F1 scores (harmonic mean of precision and recall) or accuracies and 95% CIs.
    Results: A total of 9006 social media posts were annotated, of which 1748 (19.4%) were related to abuse, 2001 (22.2%) were related to information, 4830 (53.6%) were unrelated, and 427 (4.7%) were not in the English language. Yearly rates of abuse-indicating social media posts showed statistically significant correlation with county-level opioid-related overdose death rates (n = 75) for 3 years (Pearson r = 0.451, P < .001; Spearman r = 0.331, P = .004). Abuse-indicating tweet rates showed consistent correlations with 4 NSDUH metrics (n = 13) associated with nonmedical prescription opioid use (Pearson r = 0.683, P = .01; Spearman r = 0.346, P = .25), illicit drug use (Pearson r = 0.850, P < .001; Spearman r = 0.341, P = .25), illicit drug dependence (Pearson r = 0.937, P < .001; Spearman r = 0.495, P = .09), and illicit drug dependence or abuse (Pearson r = 0.935, P < .001; Spearman r = 0.401, P = .17) over the same 3-year period, although the tests lacked power to demonstrate statistical significance. A classification approach involving an ensemble of classifiers produced the best performance in accuracy or microaveraged F1 score (0.726; 95% CI, 0.708-0.743).
    Conclusions and Relevance: The correlations obtained in this study suggest that a social media-based approach reliant on supervised machine learning may be suitable for geolocation-centric monitoring of the US opioid epidemic in near real time.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2019.14672
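    The microaveraged F1 score reported above pools true positives, false positives, and false negatives across all classes before computing precision and recall; for single-label multiclass problems like this 4-class task, it coincides with accuracy. A small sketch with toy labels mirroring the study's annotation scheme (the labels themselves are invented, not the study's data):

    ```python
    def micro_f1(y_true, y_pred, classes):
        # Pool TP/FP/FN over all classes, then compute precision, recall, F1.
        tp = fp = fn = 0
        for c in classes:
            tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
            fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
            fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Four classes matching the study's categories; example labels are hypothetical.
    classes = ["abuse", "information", "unrelated", "non_english"]
    y_true = ["abuse", "information", "unrelated", "unrelated", "non_english", "abuse"]
    y_pred = ["abuse", "unrelated",   "unrelated", "unrelated", "non_english", "information"]
    score = micro_f1(y_true, y_pred, classes)
    ```

    Microaveraging weights every post equally, so the dominant "unrelated" class (53.6% of posts) influences the score more than the rarer "abuse" class; macroaveraging would instead weight each class equally.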