bims-arihec Biomed News
on Artificial Intelligence in Healthcare
Issue of 2019‒12‒22
fifteen papers selected by
Céline Bélanger
Cogniges Inc.


  1. Public Health Genomics. 2019 Dec 13. 1-17
      Artificial intelligence (AI) is changing the world we live in, and it has the potential to transform struggling healthcare systems with new efficiencies, new therapies, new diagnostics, and new economies. Already, AI is having an impact on healthcare, and new prospects of far greater advances open up daily. This paper sets out how AI can bring new precision to care, with benefits for patients and for society as a whole. But it also sets out the conditions for realizing the potential: key issues are ensuring adequate access to data, an appropriate regulatory environment, action to sustain innovation in research institutes and industry big and small, promotion of take-up of innovation by the healthcare establishment, and resolution of a range of vital legal and ethical questions centred on safeguarding patients and their rights. For Europe to fulfil the conditions for success, it will have to find a new spirit of cooperation that can overcome the handicaps of the continent's fragmented technical and legal landscape. The start the European Union has made shows some ambition, but a clearer strategic vision and firmer plans for implementation will be needed. The European Alliance for Personalised Medicine (EAPM) has listed its own priorities: data, integrating innovation into care, building trust, developing skills and constructing policy frameworks that guarantee infrastructure, equitable access, and legal clarity.
    Keywords:  Artificial intelligence; Big data; Commission; Diagnostics; Digital health; Enablers; European Union; Genomics; Information; Information and communication technology; Innovation; Machine learning; Member States; Personalised healthcare; Personalised medicine; Precision medicine; Regulatory framework; Systems; Value
    DOI:  https://doi.org/10.1159/000504785
  2. Eur J Radiol. 2019 Dec 11. pii: S0720-048X(19)30424-3. [Epub ahead of print]123 108774
      Artificial intelligence is a hot topic in medical imaging. The development of deep learning methods, and in particular the use of convolutional neural networks (CNNs), has led to substantial performance gains over classic machine learning techniques. Multiple applications are currently being evaluated, especially in thoracic imaging, such as lung nodule evaluation, tuberculosis or pneumonia detection, and quantification of diffuse lung diseases. Chest radiography, given its high procedure volume and increasing data availability, is a near-ideal domain for developing deep learning algorithms for automatic interpretation, which require large annotated datasets. Current algorithms are able to detect up to 14 common anomalies when present as isolated findings. Chest computed tomography is another major field of application for artificial intelligence, especially in the perspective of large-scale lung cancer screening. It is important for radiologists to understand, actively contribute to, and lead this new era of radiology powered by artificial intelligence. Such a perspective requires understanding new terms and concepts associated with machine learning. The objective of this paper is to provide useful definitions for understanding these methods and their possibilities, and to report current and future developments in thoracic imaging. Prospective validation of AI tools will be required before routine clinical implementation.
    Keywords:  Artificial intelligence; Deep learning; Machine learning; Thoracic imaging
    DOI:  https://doi.org/10.1016/j.ejrad.2019.108774
  3. J Orofac Orthop. 2019 Dec 18.
      PURPOSE: The aim of this investigation was to create an automated cephalometric X‑ray analysis using a specialized artificial intelligence (AI) algorithm. We compared the accuracy of this analysis to the current gold standard (analyses performed by human experts) to evaluate the precision and clinical applicability of such an approach in routine orthodontics.
    METHODS: For training of the network, 12 experienced examiners identified 18 landmarks on a total of 1792 cephalometric X‑rays. To evaluate the quality of the AI's predictions, both the AI and each examiner analyzed 12 commonly used orthodontic parameters on the basis of 50 cephalometric X‑rays that were not part of the AI's training data. The median values of the 12 examiners for each parameter were defined as the humans' gold standard and compared to the AI's predictions.
    RESULTS: There were almost no statistically significant differences between humans' gold standard and the AI's predictions. Differences between the two analyses do not seem to be clinically relevant.
    CONCLUSIONS: We created an AI algorithm able to analyze unknown cephalometric X‑rays at almost the same quality level as experienced human examiners (current gold standard). This study is one of the first to successfully enable implementation of AI into dentistry, in particular orthodontics, satisfying medical requirements.
    Keywords:  Algorithms; Cephalometric X‑rays; Deep learning; Machine learning; Medical imaging
    DOI:  https://doi.org/10.1007/s00056-019-00203-8
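The gold-standard construction in item 3 — the median of the 12 examiners' landmark annotations — can be sketched as follows. This is a minimal illustration with random stand-in coordinates; the array shape and variable names are assumptions, not the authors' data format.

```python
# Median-of-examiners gold standard: for each landmark, take the median
# (x, y) position across all 12 examiners' annotations of one radiograph.
import numpy as np

rng = np.random.default_rng(2)
# 12 examiners x 18 landmarks x 2 coordinates (x, y) for one radiograph
examiner_landmarks = rng.normal(loc=100.0, scale=2.0, size=(12, 18, 2))

# Median along the examiner axis gives one (18, 2) gold-standard array
gold_standard = np.median(examiner_landmarks, axis=0)
print(gold_standard.shape)
```

The median (rather than the mean) limits the influence of a single outlier annotation, which is likely why it serves as the reference here.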
  4. Ann Vasc Surg. 2019 Dec 16. pii: S0890-5096(19)31033-7. [Epub ahead of print]
      Artificial intelligence (AI) is a broad discipline that aims to design systems displaying properties of human intelligence. While it has led to many advances and applications in daily life, its introduction in medicine is still in its infancy. AI has created interesting perspectives for medical research and clinical practice but has sometimes been associated with hype, leading to misunderstanding of its real capabilities. Here, we introduce the fundamental notions of AI and provide an overview of its potential applications in medical and surgical practice. In light of current knowledge, limitations and challenges to be faced, as well as future directions, are discussed.
    Keywords:  artificial intelligence; big data; deep learning; machine learning
    DOI:  https://doi.org/10.1016/j.avsg.2019.11.037
  5. Paediatr Anaesth. 2019 Dec 17.
      Artificial intelligence and machine learning are rapidly expanding fields with increasing relevance in anesthesia and in particular, airway management. The ability of artificial intelligence and machine learning algorithms to recognize patterns from large volumes of complex data makes them attractive for use in pediatric anesthesia airway management. The purpose of this review is to introduce artificial intelligence, machine learning, and deep learning to the pediatric anesthesiologist. Current evidence and developments in artificial intelligence, machine learning and deep learning relevant to pediatric airway management are presented. We critically assess the current evidence on the use of artificial intelligence and machine learning in the assessment, diagnosis, monitoring, procedure assistance, and predicting outcomes during pediatric airway management. Further, we discuss the limitations of these technologies and offer areas for focused research that may bring pediatric airway management anesthesiology into the era of artificial intelligence and machine learning.
    DOI:  https://doi.org/10.1111/pan.13792
  6. Cancers (Basel). 2019 Dec 12. pii: E2007. [Epub ahead of print]11(12):
      This study used machine learning (ML) to predict the tumor stage in the TNM (tumor, node, and metastasis) staging of colon cancer from the most influential histopathology parameters and to predict the five-year disease-free survival (DFS) period. From the colorectal cancer (CRC) registry of Chang Gung Memorial Hospital, Linkou, Taiwan, 4021 patients were selected for the analysis. Various ML algorithms were applied to predict the tumor stage of colon cancer, considering the Tumor Aggression Score (TAS) as a prognostic factor. The performance of each ML algorithm was evaluated using five-fold cross-validation, an effective method of model validation. Accuracy was determined both for standard TNM staging and for TNM staging with the Tumor Aggression Score. The Random Forest model achieved an F-measure of 0.89 when the Tumor Aggression Score was included as an attribute alongside the standard attributes normally used for TNM stage prediction. The Random Forest algorithm also outperformed all other algorithms in predicting five-year DFS, with an accuracy of approximately 84% and an area under the curve (AUC) of 0.82 ± 0.10.
    Keywords:  TNM staging; artificial intelligence; colon cancer; disease-free survival; machine learning; prediction
    DOI:  https://doi.org/10.3390/cancers11122007
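The evaluation scheme in item 6 — a Random Forest scored by five-fold cross-validation on F-measure and AUC — can be sketched as below. This is a minimal sketch on synthetic data; the real feature matrix would hold the standard TNM attributes plus the Tumor Aggression Score, which are not available here.

```python
# Random Forest evaluated with five-fold cross-validation, reporting
# F1 (F-measure) and ROC AUC, mirroring the study's validation setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 6))   # stand-in for TNM attributes + TAS
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
f1_scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
auc_scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"F1: {f1_scores.mean():.2f}  AUC: {auc_scores.mean():.2f}")
```

Reporting the mean over folds, as the paper does, gives a less optimistic estimate than a single train/test split.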
  7. Front Med. 2019 Dec 16.
      As a promising method in artificial intelligence, deep learning has been proven successful in several domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques in medical image areas. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. Deep learning has been widely applied in medical imaging for improved image analysis. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodule diseases, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images is discussed, along with an analysis of future challenges and potential directions.
    Keywords:  deep learning; neural networks; pulmonary medical image; survey
    DOI:  https://doi.org/10.1007/s11684-019-0726-4
  8. Front Oncol. 2019 ;9 1296
      Bladder cancer is a potentially fatal cancer of the genitourinary tract with high annual morbidity and mortality. Its high recurrence rate, ranging from 50% to 80%, makes bladder cancer one of the most challenging and costly diseases to manage. Faced with the shortcomings of existing methods, a recently emerging approach to measuring imaging biomarkers and extracting quantitative features, called "radiomics," shows great potential for the detection, grading, and follow-up management of bladder cancer. Furthermore, machine learning (ML) algorithms built on "big data" are fueling the power of radiomics for bladder cancer monitoring in the era of precision medicine. The usefulness of this novel combination of radiomics and ML has already been demonstrated in a large number of successful cases. It offers outstanding strengths, including non-invasiveness, low cost, and high efficiency, which may revolutionize tumor assessment and relieve the clinical workforce. However, for extensive clinical application in the future, more effort is needed to overcome the limitations arising from technological deficiencies, inherent problems in the radiomic analysis process, and the quality of present studies.
    Keywords:  bladder cancer; full-cycle management; machine learning; precision medicine; radiomics
    DOI:  https://doi.org/10.3389/fonc.2019.01296
  9. World J Crit Care Med. 2019 Nov 19. 8(7): 120-126
      BACKGROUND: With the recent change in the definition of sepsis and septic shock (Sepsis-3), an electronic search algorithm was required to identify cases for data automation. This supervised machine learning method would help screen a large volume of electronic medical records (EMR) for efficient research purposes.
    AIM: To develop and validate a computable phenotype, via a supervised machine learning method, for retrospectively identifying sepsis and septic shock in critical care patients.
    METHODS: A supervised machine learning method was developed based on culture orders, Sequential Organ Failure Assessment (SOFA) scores, serum lactate levels, and vasopressor use in the intensive care units (ICUs). The computable phenotype was derived from a retrospective analysis of a random cohort of 100 patients admitted to the medical ICU and then validated in an independent cohort of 100 patients. We compared the results of the computable phenotype to a gold standard established by manual EMR review by 2 blinded reviewers, with disagreements resolved by a critical care clinician. Patients with a SOFA score ≥ 2 during the ICU stay and a culture within 72 h of the time of admission were identified. Sepsis V1 was defined as a blood culture with a SOFA score ≥ 2, and Sepsis V2 as any culture with a SOFA score ≥ 2. A serum lactate level ≥ 2 mmol/L from 24 h before admission through the ICU stay, plus vasopressor use, combined with Sepsis V1 and V2, defined Septic Shock V1 and V2, respectively.
    RESULTS: In the derivation subset of 100 random patients, the final machine learning strategy achieved a sensitivity and specificity of 100% and 84% for Sepsis V1, 100% and 95% for Sepsis V2, 78% and 80% for Septic Shock V1, and 80% and 90% for Septic Shock V2. Agreement between the two blinded reviewers yielded κ = 0.86 for Sepsis V2 and 0.90 for Septic Shock V2. In validation on a separate subset of 100 random patients, sensitivity and specificity were 100% each for all 4 diagnoses.
    CONCLUSION: Supervised machine learning for identification of sepsis and septic shock is reliable and an efficient alternative to manual chart review.
    Keywords:  Computable phenotype; Critical care; Machine learning; Sepsis; Septic shock
    DOI:  https://doi.org/10.5492/wjccm.v8.i7.120
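The computable phenotype in item 9 is, at its core, a set of threshold rules over chart data. A minimal, hypothetical sketch of those rules follows; the record fields and function names are assumptions for illustration, not the authors' schema.

```python
# Rule-based sketch of the sepsis/septic shock computable phenotype:
# SOFA >= 2 plus a culture within 72 h of admission defines sepsis;
# adding lactate >= 2 mmol/L and vasopressor use defines septic shock.
from dataclasses import dataclass

@dataclass
class IcuStay:
    max_sofa: int          # highest SOFA score during the ICU stay
    blood_culture: bool    # blood culture ordered within 72 h of admission
    any_culture: bool      # any culture ordered within 72 h of admission
    max_lactate: float     # peak serum lactate (mmol/L) from 24 h pre-admission on
    vasopressors: bool     # vasopressor use during the stay

def sepsis_v1(s: IcuStay) -> bool:
    """Sepsis V1: blood culture within 72 h of admission plus SOFA >= 2."""
    return s.blood_culture and s.max_sofa >= 2

def sepsis_v2(s: IcuStay) -> bool:
    """Sepsis V2: any culture within 72 h of admission plus SOFA >= 2."""
    return s.any_culture and s.max_sofa >= 2

def septic_shock_v2(s: IcuStay) -> bool:
    """Septic Shock V2: Sepsis V2 plus lactate >= 2 mmol/L and vasopressors."""
    return sepsis_v2(s) and s.max_lactate >= 2.0 and s.vasopressors
```

Encoding the definitions as explicit predicates like this is what makes the phenotype auditable against manual chart review.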
  10. Comput Methods Programs Biomed. 2019 Nov 27. pii: S0169-2607(19)31094-6. [Epub ahead of print]187 105242
      Alzheimer's Disease (AD) is one of the leading causes of death in developed countries. From a research point of view, impressive results have been reported using computer-aided algorithms, but clinically no practical diagnostic method is available. In recent years, deep models have become popular, especially in dealing with images. Since 2013, deep learning has begun to gain considerable attention in AD detection research, with the number of published papers in this area increasing drastically since 2017. Deep models have been reported to be more accurate for AD detection compared to general machine learning techniques. Nevertheless, AD detection is still challenging, and for classification, it requires a highly discriminative feature representation to separate similar brain patterns. This paper reviews the current state of AD detection using deep learning. Through a systematic literature review of over 100 articles, we set out the most recent findings and trends. Specifically, we review useful biomarkers and features (personal information, genetic data, and brain scans), the necessary pre-processing steps, and different ways of dealing with neuroimaging data originating from single-modality and multi-modality studies. Deep models and their performance are described in detail. Although deep learning has achieved notable performance in detecting AD, there are several limitations, especially regarding the availability of datasets and training procedures.
    Keywords:  Alzheimer's disease; Auto-encoders; Convolutional neural networks; Deep learning; Recurrent neural networks; Transfer learning
    DOI:  https://doi.org/10.1016/j.cmpb.2019.105242
  11. Cardiovasc Res. 2019 Dec 19. pii: cvz321. [Epub ahead of print]
      AIMS: Our aim was to evaluate the performance of machine learning (ML), integrating clinical parameters with coronary artery calcium (CAC) and automated epicardial adipose tissue (EAT) quantification, for predicting the long-term risk of myocardial infarction (MI) and cardiac death in asymptomatic subjects.
    METHODS AND RESULTS: Our study included 1912 asymptomatic subjects [1117 (58.4%) male, age: 55.8 ± 9.1 years] from the prospective EISNER trial with long-term follow-up after CAC scoring. EAT volume and density were quantified using a fully automated deep learning method. An ML extreme gradient boosting model was trained using clinical covariates, plasma lipid panel measurements, risk factors, CAC, aortic calcium, and automated EAT measures, and validated using repeated 10-fold cross-validation. During a mean follow-up of 14.5 ± 2 years, 76 events of MI and/or cardiac death occurred. ML obtained a significantly higher AUC than atherosclerotic cardiovascular disease (ASCVD) risk and CAC score for predicting events (ML: 0.82; ASCVD: 0.77; CAC: 0.77; P < 0.05 for all). Subjects with a higher ML score (by Youden's index) had a high hazard of events (HR: 10.38, P < 0.001); the relationship persisted in multivariable analysis including ASCVD risk and CAC measures (HR: 2.94, P = 0.005). Age, ASCVD risk, and CAC were prognostically important for both sexes. Systolic blood pressure was more important than cholesterol in women, and vice versa in men.
    CONCLUSIONS: In this prospective study, machine learning used to integrate clinical and quantitative imaging-based variables significantly improves prediction of MI and cardiac death compared with standard clinical risk assessment. Following further validation, such a personalized paradigm could potentially be used to improve cardiovascular risk assessment.
    Keywords:  Coronary calcium scoring; Epicardial adipose tissue; Machine learning; Myocardial infarction and cardiac death
    DOI:  https://doi.org/10.1093/cvr/cvz321
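The validation scheme in item 11 — a gradient-boosted model assessed by repeated 10-fold cross-validation on AUC — can be sketched as below. This uses scikit-learn's gradient boosting as a stand-in for the study's extreme gradient boosting (XGBoost) model, and synthetic features in place of the clinical, CAC, and EAT measures.

```python
# Repeated stratified 10-fold cross-validation of a gradient-boosting
# classifier, scored by ROC AUC (3 repeats x 10 folds = 30 scores).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 8))   # stand-in for clinical + CAC + EAT features
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.8, size=n) > 0).astype(int)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
auc = cross_val_score(GradientBoostingClassifier(random_state=1), X, y,
                      cv=cv, scoring="roc_auc")
print(f"mean AUC over {len(auc)} folds: {auc.mean():.2f}")
```

Repeating the 10-fold split several times, as the study does, reduces the variance of the AUC estimate that comes from any single random partition.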
  12. Front Physiol. 2019 ;10 1416
      Skeletal muscle injury provokes a regenerative response, characterized by the de novo generation of myofibers that are distinguished by central nucleation and re-expression of developmentally restricted genes. In addition to these characteristics, myofiber cross-sectional area (CSA) is widely used to evaluate muscle hypertrophic and regenerative responses. Here, we introduce QuantiMus, a free software program that uses machine learning algorithms to quantify muscle morphology and molecular features with high precision and quick processing-time. The ability of QuantiMus to define and measure myofibers was compared to manual measurement or other automated software programs. QuantiMus rapidly and accurately defined total myofibers and measured CSA with comparable performance but quantified the CSA of centrally-nucleated fibers (CNFs) with greater precision compared to other software. It additionally quantified the fluorescence intensity of individual myofibers of human and mouse muscle, which was used to assess the distribution of myofiber type, based on the myosin heavy chain isoform that was expressed. Furthermore, analysis of entire quadriceps cross-sections of healthy and mdx mice showed that dystrophic muscle had an increased frequency of Evans blue dye+ injured myofibers. QuantiMus also revealed that the proportion of centrally nucleated, regenerating myofibers that express embryonic myosin heavy chain (eMyHC) or neural cell adhesion molecule (NCAM) were increased in dystrophic mice. Our findings reveal that QuantiMus has several advantages over existing software. The unique self-learning capacity of the machine learning algorithms provides superior accuracy and the ability to rapidly interrogate the complete muscle section. These qualities increase rigor and reproducibility by avoiding methods that rely on the sampling of representative areas of a section. 
This is of particular importance for the analysis of dystrophic muscle given the "patchy" distribution of muscle pathology. QuantiMus is an open source tool, allowing customization to meet investigator-specific needs and provides novel analytical approaches for quantifying muscle morphology.
    Keywords:  Duchenne muscular dystrophy; central nucleation; cross-sectional area; histological analysis; machine learning; mdx; muscle regeneration; myofiber typing
    DOI:  https://doi.org/10.3389/fphys.2019.01416
  13. ACS Nano. 2019 Dec 18.
      Caused by the tick-borne spirochete Borrelia burgdorferi, Lyme disease (LD) is the most common vector-borne infectious disease in North America and Europe. Though timely diagnosis and treatment are effective in preventing disease progression, current tests are insensitive in early-stage LD, with a sensitivity of <50%. Additionally, the serological testing currently recommended by the U.S. Centers for Disease Control and Prevention has high costs (>$400/test) and extended sample-to-answer timelines (>24 h). To address these challenges, we created a cost-effective and rapid point-of-care (POC) test for early-stage LD that assays for antibodies specific to seven Borrelia antigens and a synthetic peptide in a paper-based multiplexed vertical flow assay (xVFA). We trained a deep-learning-based diagnostic algorithm to select an optimal subset of antigen/peptide targets and then blindly tested our xVFA using human samples (N(+) = 42, N(-) = 54), achieving an area-under-the-curve (AUC), sensitivity, and specificity of 0.950, 90.5%, and 87.0%, respectively, outperforming previous LD POC tests. With batch-specific standardization and threshold tuning, the specificity of our blind-testing performance improved to 96.3%, with an AUC and sensitivity of 0.963 and 85.7%, respectively.
    Keywords:  Lyme disease; machine learning; multiplexed immunoassay; paper-based immunoassay; point-of-care testing
    DOI:  https://doi.org/10.1021/acsnano.9b08151
  14. Curr Med Res Opin. 2019 Dec 19. 1
      Aims: Some hypoglycemic therapies have been associated with a lower risk of cardiovascular outcomes. We investigated the incidence of cardiovascular disease among patients with type 2 diabetes using antidiabetic drugs from three classes: sodium-glucose co-transporter-2 inhibitors (SGLT-2is), glucagon-like peptide-1 receptor agonists (GLP-1RAs), and dipeptidyl peptidase-4 inhibitors (DPP-4is).
    Materials and methods: We compared the risk of myocardial infarction (MI) among these drug classes and developed a machine learning model for predicting MI in patients without prior heart disease. We analyzed US health plan data for patients without prior MI or insulin therapy who were aged ≥40 years at initial prescription and had not received oral antidiabetic drugs for ≥6 months previously. After developing a machine learning model to predict MI, a proportional hazards analysis of MI incidence was conducted using the risk obtained with this model and the drug classes as explanatory variables.
    Results: We analyzed 199,116 patients (mean age: years), comprising 110,278 (58.6) prescribed DPP-4is, 43,538 (55.1) prescribed GLP-1RAs, and 45,300 (55.3) prescribed SGLT-2is. Receiver operating characteristic analysis showed higher precision for machine learning than for logistic regression analysis. Proportional hazards analysis by machine learning revealed a significantly lower risk of MI with SGLT-2is or GLP-1RAs than with DPP-4is (hazard ratio: 0.81, 95% confidence interval: 0.72-0.91, p = 0.0004 vs. 0.63, 0.56-0.72, p < 0.0001). MI risk was also significantly lower with GLP-1RAs than with SGLT-2is (0.77, 0.66-0.90, p = 0.001).
    Limitations: All patients analyzed were covered by US commercial health plans, so information on patients aged ≥65 years was limited and the socioeconomic background may have been biased. Also, the observation period differed among the three drug classes due to differing release dates.
    Conclusions: Machine learning analysis suggested that the risk of MI was 37% lower for type 2 diabetes patients without prior MI using GLP-1RAs versus DPP-4is, and 19% lower for SGLT-2is versus DPP-4is.
    Keywords:  cardiovascular disease; machine learning; myocardial infarction; oral antidiabetic drugs; type 2 diabetes
    DOI:  https://doi.org/10.1080/03007995.2019.1706043
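The percentages in item 14's conclusions follow directly from the reported hazard ratios: relative risk reduction = 1 − HR. A small worked check (the function name is ours, for illustration):

```python
# Convert a hazard ratio into the percent risk reduction quoted in the
# conclusions: HR 0.63 -> 37% lower risk; HR 0.81 -> 19% lower risk.
def risk_reduction(hazard_ratio: float) -> int:
    """Percent lower hazard relative to the reference drug class."""
    return round((1.0 - hazard_ratio) * 100)

print(risk_reduction(0.63))  # GLP-1RAs vs. DPP-4is -> 37
print(risk_reduction(0.81))  # SGLT-2is vs. DPP-4is -> 19
```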