bims-arihec Biomed News
on Artificial intelligence in healthcare
Issue of 2019–12–01
eighteen papers selected by
Céline Bélanger, Cogniges Inc.



  1. BMJ Health Care Inform. 2019 Nov. 26(1): e100081. [Epub ahead of print]
      The use of artificial intelligence (AI) in patient care can offer significant benefits. However, there is a lack of independent evaluation of AI systems in use. The paper argues that consideration should be given to how AI will be incorporated into clinical processes and services. Human factors challenges likely to arise at this level include cognitive aspects (automation bias and human performance), handover and communication between clinicians and AI systems, situation awareness, and the impact on interactions with patients. Human factors research should accompany the development of AI from the outset.
    Keywords:  computer methodologies; information systems; patient care
    DOI:  https://doi.org/10.1136/bmjhci-2019-100081
  2. JMIR Aging. 2019 Nov 29. 2(2): e15429
       BACKGROUND: The increase in life expectancy and recent advancements in technology and medical science have changed the way we deliver health services to aging societies. Evidence suggests that home telemonitoring can significantly decrease the number of readmissions, and continuous monitoring of older adults' daily activities and health-related issues might prevent medical emergencies.
    OBJECTIVE: The primary objective of this review was to identify advances in assistive technology devices for seniors and aging-in-place technology and to determine the level of evidence for research on remote patient monitoring, smart homes, telecare, and artificially intelligent monitoring systems.
    METHODS: A literature review was conducted using Cumulative Index to Nursing and Allied Health Literature Plus, MEDLINE, EMBASE, Institute of Electrical and Electronics Engineers Xplore, ProQuest Central, Scopus, and Science Direct. Publications related to older people's care, independent living, and novel assistive technologies were included in the study.
    RESULTS: A total of 91 publications met the inclusion criteria. In total, four themes emerged from the data: technology acceptance and readiness, novel patient monitoring and smart home technologies, intelligent algorithm and software engineering, and robotics technologies. The results revealed that most studies had poor reference standards without an explicit critical appraisal.
    CONCLUSIONS: The use of ubiquitous in-home monitoring and smart technologies for the care of older adults will increase their independence and the health care services available to them, as well as improve health care outcomes for frail elderly people. This review identified four themes that require different conceptual approaches to solution development. Whereas the engineering teams focused on prototype and algorithm development, the medical science teams concentrated on outcome research. We also identified the need to develop custom technology solutions for different aging societies. The convergence of medicine and informatics could lead to new interdisciplinary research models and new assistive products for the care of older adults.
    Keywords:  artificially intelligent home monitoring; innovative assisted living tools for aging society; older adults; robotic technologies; smart home
    DOI:  https://doi.org/10.2196/15429
  3. Gut. 2019 Nov 28. pii: gutjnl-2019-319292. [Epub ahead of print]
       OBJECTIVE: Diagnostic tests, such as Immunoscore, predict prognosis in patients with colon cancer. However, additional prognostic markers could be detected on pathological slides using artificial intelligence tools.
    DESIGN: We developed software to detect colon tumour, healthy mucosa, stroma and immune cells on CD3- and CD8-stained slides. Lymphocyte density and surface area were quantified automatically in the tumour core (TC) and invasive margin (IM). Using a LASSO algorithm, DGMate (DiGital tuMor pArameTErs), we detected digital parameters within the tumour cells that were related to patient outcomes.
    RESULTS: Within the dataset of 1018 patients, we observed that a poorer relapse-free survival (RFS) was associated with high IM stromal area (HR 5.65; 95% CI 2.34 to 13.67; p<0.0001) and high DGMate (HR 2.72; 95% CI 1.92 to 3.85; p<0.001). Higher CD3+ TC, CD3+ IM and CD8+ TC densities were significantly associated with a longer RFS. Analysis of variance showed that CD3+ TC yielded a similar prognostic value to the classical CD3/CD8 Immunoscore (p=0.44). A combination of the IM stromal area, DGMate and CD3, designated 'DGMuneS', outperformed Immunoscore when used in estimating patients' prognosis (C-index=0.601 vs 0.578, p=0.04) and was independently associated with patient outcomes following Cox multivariate analysis. A predictive nomogram based on DGMuneS and clinical variables identified a group of patients with less than 10% relapse risk and another group with a 50% relapse risk.
    CONCLUSION: These findings suggest that artificial intelligence can potentially improve patient care by assisting pathologists in better defining stage III colon cancer patients' prognosis.
    Keywords:  adjuvant treatment; colorectal cancer; computerised image analysis; immunohistopathology
    DOI:  https://doi.org/10.1136/gutjnl-2019-319292
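    Editor's sketch (not from the paper above): the DGMate analysis pairs a LASSO penalty with survival outcomes to select prognostic digital slide parameters. A minimal Python illustration of a LASSO-penalised Cox model on synthetic, hypothetically named slide features, using the lifelines library:
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy stand-in for per-patient slide features (DGMate-style digital parameters,
        # CD3 density, IM stromal area) plus relapse-free survival follow-up.
        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "dgmate_score": rng.normal(size=n),
            "cd3_tc_density": rng.normal(size=n),
            "im_stromal_area": rng.normal(size=n),
            "rfs_months": rng.exponential(scale=36, size=n),
            "relapse": rng.integers(0, 2, size=n),
        })

        # l1_ratio=1.0 makes the elastic-net penalty a pure LASSO, so uninformative
        # features shrink to zero and the retained ones act as candidate markers.
        cox = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
        cox.fit(df, duration_col="rfs_months", event_col="relapse")
        cox.print_summary()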
  4. J Med Internet Res. 2019 Nov 27. 21(11): e15787
       BACKGROUND: The data regarding the use of conversational agents in oncology are scarce.
    OBJECTIVE: The aim of this study was to verify whether an artificial conversational agent was able to provide answers to patients with breast cancer with a level of satisfaction similar to the answers given by a group of physicians.
    METHODS: This study is a blind, noninferiority randomized controlled trial that compared the information given by the chatbot, Vik, with that given by a multidisciplinary group of physicians to patients with breast cancer. Patients were women with breast cancer in treatment or in remission. The European Organisation for Research and Treatment of Cancer Quality of Life Group information questionnaire (EORTC QLQ-INFO25) was adapted and used to compare the quality of the information provided to patients by the physician or the chatbot. The primary outcome was to show that the answers given by the Vik chatbot to common questions asked by patients with breast cancer about their therapy management are at least as satisfying as answers given by a multidisciplinary medical committee by comparing the success rate in each group (defined by a score above 3). The secondary objective was to compare the average scores obtained by the chatbot and physicians for each INFO25 item.
    RESULTS: A total of 142 patients were included and randomized into two groups of 71. All were female, with a mean age of 42 years (SD 19). The success rate (defined as a score >3) was 69% (49/71) in the chatbot group versus 64% (46/71) in the physicians group. The binomial test showed the noninferiority (P<.001) of the chatbot's answers.
    CONCLUSIONS: This is the first study that assessed an artificial conversational agent used to inform patients with cancer. The EORTC INFO25 scores from the chatbot were found to be noninferior to the scores of the physicians. Artificial conversational agents may save patients with minor health concerns from a visit to the doctor. This could allow clinicians to spend more time to treat patients who need a consultation the most.
    TRIAL REGISTRATION: Clinicaltrials.gov NCT03556813, https://tinyurl.com/rgtlehq.
    Keywords:  cancer; chatbot; clinical trial
    DOI:  https://doi.org/10.2196/15787
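    Editor's sketch (illustration only, not the trial's statistical code): the noninferiority comparison above contrasts the chatbot and physician success rates against a margin. A minimal normal-approximation version in Python, with the 10% margin an assumption rather than the published analysis plan:
        from math import sqrt
        from scipy.stats import norm

        chatbot_success, chatbot_n = 49, 71      # 69% of answers scored > 3
        physician_success, physician_n = 46, 71  # 64% of answers scored > 3
        margin = 0.10                            # assumed noninferiority margin

        p1 = chatbot_success / chatbot_n
        p2 = physician_success / physician_n
        se = sqrt(p1 * (1 - p1) / chatbot_n + p2 * (1 - p2) / physician_n)

        # H0: the chatbot is worse by more than the margin (p1 - p2 <= -margin);
        # a small one-sided p-value rejects inferiority.
        z = (p1 - p2 + margin) / se
        p_value = 1 - norm.cdf(z)
        print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")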
  5. Am J Gastroenterol. 2019 Nov 26.
      Most colorectal polyps are diminutive, and malignant potential in these polyps is uncommon, especially in the rectosigmoid. However, many diminutive polyps are still being resected to determine whether they are adenomas or serrated/hyperplastic polyps. Resecting all diminutive polyps is not cost-effective. Therefore, gastroenterologists have proposed optical diagnosis using image-enhanced endoscopy for polyp characterization. These technologies have achieved favorable outcomes but are not widely available. Artificial intelligence has been used in clinical medicine to classify lesions. Here, artificial intelligence technology for the characterization of colorectal polyps is discussed in the context of decision making about diminutive colorectal polyps.
    DOI:  https://doi.org/10.14309/ajg.0000000000000476
  6. Acad Radiol. 2019 Nov 20. pii: S1076-6332(19)30484-2. [Epub ahead of print]
      Tuberculosis is a leading cause of death from infectious disease worldwide, and is an epidemic in many developing nations. Countries where the disease is common also tend to have poor access to medical care, including diagnostic tests. Recent advancements in artificial intelligence may help to bridge this gap. In this article, we review the applications of artificial intelligence in the diagnosis of tuberculosis using chest radiography, covering simple computer-aided diagnosis systems to more advanced deep learning algorithms. In so doing, we will demonstrate an area where artificial intelligence could make a substantial contribution to global health through improved diagnosis in the future.
    Keywords:  Artificial intelligence; Computer-aided diagnosis; Deep learning; Global health; Tuberculosis
    DOI:  https://doi.org/10.1016/j.acra.2019.10.003
  7. Healthc Inform Res. 2019 Oct;25(4): 248-261
       Objectives: The incidence of type 2 diabetes mellitus has increased significantly in recent years. With the development of artificial intelligence (AI) applications in healthcare, these tools are being used for diagnosis, therapeutic decision making, and outcome prediction, including in type 2 diabetes mellitus. This study aimed to identify AI applications for type 2 diabetes mellitus care.
    Methods: This review was conducted in 2018. We searched the PubMed, Web of Science, and Embase databases using a combination of related MeSH terms. Article selection followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Finally, 31 articles were selected after inclusion and exclusion criteria were applied. Data were gathered using a data extraction form, then summarized and reported according to the study objectives.
    Results: The main applications of AI for type 2 diabetes mellitus care were screening and diagnosis at different stages. Machine learning methods were the most commonly applied techniques (71%, n = 22), and many applications combined multiple methods (23%). Among the machine learning algorithms, support vector machines (21%) and naive Bayes (19%) were the most commonly used. The most important variables used in the selected studies were body mass index, fasting blood sugar, blood pressure, HbA1c, triglycerides, low-density lipoprotein, high-density lipoprotein, and demographic variables.
    Conclusions: It is recommended that optimal algorithms be selected by testing various techniques. Support vector machines and naive Bayes may achieve better performance than other approaches because of the types of variables and targets involved in classifying diabetes-related outcomes.
    Keywords:  Artificial Intelligence; Diabetes Care; Diabetes Mellitus; Health Informatics; Machine Learning
    DOI:  https://doi.org/10.4258/hir.2019.25.4.248
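    Editor's sketch (illustration only): the review above highlights support vector machines and naive Bayes on tabular clinical predictors. A minimal Python comparison on synthetic data, with feature content and model settings assumed for illustration:
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.naive_bayes import GaussianNB

        # Toy stand-in for predictors such as BMI, fasting glucose, blood pressure,
        # HbA1c, lipids and demographics; y = 1 marks a diabetes-related outcome.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 8))
        y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=300) > 0).astype(int)

        models = {
            "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
            "Naive Bayes": GaussianNB(),
        }
        for name, model in models.items():
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
            print(f"{name}: mean cross-validated AUC = {auc.mean():.3f}")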
  8. J Neurosurg Spine. 2019 Nov 29. pii: 2019.9.SPINE19860. [Epub ahead of print] 1-8
       OBJECTIVE: Unplanned, preventable hospital readmissions within 30 days are a great burden to patients and the healthcare system. With an estimated $41.3 billion spent yearly, reducing such readmission rates is of the utmost importance. With the widespread adoption of big data and machine learning, clinicians can use these analytical tools to understand complex relationships and find predictive factors that generalize to future patients. The objective of this study was to assess the efficacy of a machine learning algorithm in predicting 30-day hospital readmission after posterior spinal fusion surgery.
    METHODS: The authors analyzed the distribution of National Surgical Quality Improvement Program (NSQIP) posterior lumbar fusions from 2011 to 2016 by using machine learning techniques to create a model predictive of hospital readmissions. A deep neural network was trained using 177 unique input variables. The model was trained and tested using cross-validation, in which the data were randomly partitioned into training (n = 17,448 [75%]) and testing (n = 5816 [25%]) data sets. In training, the 17,448 training cases were fed through a series of 7 layers, each with varying degrees of forward and backward communicating nodes (neurons).
    RESULTS: Mean and median positive predictive values were 78.5% and 78.0%, respectively. Mean and median negative predictive values were both 97%. Mean and median areas under the curve for the model were 0.812 and 0.810, respectively. The five most heavily weighted inputs were (in order of importance) return to the operating room, septic shock, superficial surgical site infection, sepsis, and being on a ventilator for >48 hours.
    CONCLUSIONS: Machine learning and artificial intelligence are powerful tools with the ability to improve understanding of predictive metrics in clinical spine surgery. The authors' model was able to predict which patients would not require readmission, and it identified the majority of readmissions (up to 60%) while maintaining a 0% false-positive rate. Such findings suggest a possible need for reevaluation of the current Hospital Readmissions Reduction Program penalties in spine surgery.
    Keywords:  30-day hospital readmissions; AUC = area under the curve; CPT = Current Procedural Terminology; DNN = deep neural network; HRRP = Hospital Readmissions Reduction Program; Hospital Readmissions Reduction Program; INR = international normalized ratio; NPV = negative predictive value; NSQIP = National Surgical Quality Improvement Program; PPV = positive predictive value; ROC = receiver operating characteristic; artificial intelligence; machine learning; posterior lumbar fusions
    DOI:  https://doi.org/10.3171/2019.9.SPINE19860
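    Editor's sketch (not the authors' published architecture): the abstract describes a seven-layer feed-forward network over 177 input variables predicting 30-day readmission. A minimal Keras outline with assumed layer widths and training settings:
        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Input(shape=(177,)),              # 177 NSQIP-derived input variables
            layers.Dense(256, activation="relu"),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(64, activation="relu"),
            layers.Dense(32, activation="relu"),
            layers.Dense(16, activation="relu"),
            layers.Dense(1, activation="sigmoid"),   # probability of 30-day readmission
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        model.summary()
        # model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=50)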
  9. World J Urol. 2019 Nov 27.
       PURPOSE: The aim of the current narrative review was to summarize the available evidence in the literature on artificial intelligence (AI) methods that have been applied during robotic surgery.
    METHODS: A narrative review of the literature was performed in the MEDLINE/PubMed and Scopus databases on the topics of artificial intelligence, autonomous surgery, machine learning, robotic surgery, and surgical navigation, focusing on articles published between January 2015 and June 2019. All available evidence was analyzed and summarized herein after an interactive peer-review process by the panel.
    LITERATURE REVIEW: The preliminary results of implementing AI in the clinical setting are encouraging. By providing a readout of the full telemetry and a sophisticated viewing console, robot-assisted surgery can be used to study and refine the application of AI in surgical practice. Machine learning approaches strengthen feedback on surgical skills acquisition, the efficiency of the surgical process, surgical guidance, and the prediction of postoperative outcomes. Tension sensors on the robotic arms and the integration of augmented reality methods can help enhance the surgical experience and monitor organ movements.
    CONCLUSIONS: The use of AI in robotic surgery is expected to have a significant impact on future surgical training and to enhance the surgical experience during a procedure. Both aims serve to realize precision surgery and thus to increase the quality of surgical care. Implementation of AI in master-slave robotic surgery may allow for the careful, step-by-step consideration of autonomous robotic surgery.
    Keywords:  Artificial intelligence; Autonomous surgery; Machine learning; Robotic surgery; Surgical navigation
    DOI:  https://doi.org/10.1007/s00345-019-03037-6
  10. Epilepsia. 2019 Nov 29.
       OBJECTIVE: Delay to resective epilepsy surgery results in avoidable disease burden and increased risk of mortality. The objective was to prospectively validate a natural language processing (NLP) application that uses provider notes to assign epilepsy surgery candidacy scores.
    METHODS: The application was trained on notes from (1) patients with a diagnosis of epilepsy and a history of resective epilepsy surgery and (2) patients who were seizure-free without surgery. The testing set included all patients with unknown surgical candidacy status and an upcoming neurology visit. Training and testing sets were updated weekly for 1 year. One- to three-word phrases contained in patients' notes were used as features. Patients prospectively identified by the application as candidates for surgery were manually reviewed by two epileptologists. Performance metrics were defined by comparing NLP-derived surgical candidacy scores with surgical candidacy status from expert chart review.
    RESULTS: The training set was updated weekly and included notes from a mean of 519 ± 67 patients. The area under the receiver operating characteristic curve (AUC) from 10-fold cross-validation was 0.90 ± 0.04 (range = 0.83-0.96) and improved by 0.002 per week (P < .001) as new patients were added to the training set. Of the 6395 patients who visited the neurology clinic, 4211 (67%) were evaluated by the model. The prospective AUC on this test set was 0.79 (95% confidence interval [CI] = 0.62-0.96). Using the optimal surgical candidacy score threshold, sensitivity was 0.80 (95% CI = 0.29-0.99), specificity was 0.77 (95% CI = 0.64-0.88), positive predictive value was 0.25 (95% CI = 0.07-0.52), and negative predictive value was 0.98 (95% CI = 0.87-1.00). The number needed to screen was 5.6.
    SIGNIFICANCE: An electronic health record-integrated NLP application can accurately assign surgical candidacy scores to patients in a clinical setting.
    Keywords:  clinical decision support; epilepsy surgery; machine learning; natural language processing
    DOI:  https://doi.org/10.1111/epi.16398
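    Editor's sketch (illustration only): the application above scores notes using 1- to 3-word phrases as features. A minimal Python pipeline on toy notes; TF-IDF weighting and logistic regression are assumptions, not the published learner:
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy notes; labels follow the training definition in the abstract:
        # 1 = had resective epilepsy surgery, 0 = seizure-free without surgery.
        train_notes = [
            "focal epilepsy refractory to two antiseizure medications, considering resection",
            "seizure free for three years on levetiracetam monotherapy",
            "drug resistant temporal lobe epilepsy, referred for presurgical evaluation",
            "well controlled epilepsy, no seizures since medication adjustment",
        ]
        train_labels = [1, 0, 1, 0]

        pipeline = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 3)),   # 1- to 3-word phrases as features
            LogisticRegression(max_iter=1000),
        )
        pipeline.fit(train_notes, train_labels)

        new_note = ["seizures persist despite two medications, mri shows hippocampal sclerosis"]
        candidacy_score = pipeline.predict_proba(new_note)[:, 1]  # surgical candidacy score
        print(candidacy_score)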
  11. Gastroenterology. 2019 Nov 21. pii: S0016-5085(19)41586-2. [Epub ahead of print]
       BACKGROUND & AIMS: We aimed to develop and validate a deep-learning computer-aided detection (CAD) system, suitable for use in real time in clinical practice, to improve endoscopic detection of early neoplasia in patients with Barrett's esophagus (BE).
    METHODS: We developed a hybrid ResNet-UNet model CAD system using 5 independent endoscopy datasets. We performed pre-training using 494,364 labelled endoscopic images collected from all intestinal segments. We then trained the system using 1704 unique, high-resolution esophageal images of rigorously confirmed early-stage neoplasia in BE and of non-dysplastic BE, derived from 669 patients. System performance was assessed using datasets 4 and 5. Dataset 5 was also scored by 53 general endoscopists with a wide range of experience from 4 countries to benchmark CAD system performance. Images in datasets 2-5 that contained early-stage neoplasia were delineated in detail for neoplasm position and extent by multiple experts; their evaluations, coupled with histopathology findings, served as the ground truth for segmentation.
    RESULTS: The CAD system classified images as containing neoplasms or non-dysplastic BE with 89% accuracy, 90% sensitivity, and 88% specificity (dataset 4, 80 patients and images). In dataset 5 (80 patients and images), values for the CAD system vs those of the general endoscopists were 88% vs 73% accuracy, 93% vs 72% sensitivity, and 83% vs 74% specificity. The CAD system achieved higher accuracy than any of the individual 53 non-expert endoscopists, with comparable delineation performance. CAD delineations of the area of neoplasm overlapped with those from the BE experts in all detected neoplasia in datasets 4 and 5. The CAD system identified the optimal site for biopsy of detected neoplasia in 97% and 92% of cases (datasets 4 and 5, respectively).
    CONCLUSIONS: We developed, validated, and benchmarked a deep-learning computer-aided system for primary detection of neoplasia in patients with BE. The system detected neoplasia with high accuracy and near-perfect delineation performance.
    Keywords:  Barrett surveillance; artificial intelligence; esophageal cancer; machine learning
    DOI:  https://doi.org/10.1053/j.gastro.2019.11.030
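    Editor's sketch (not the authors' implementation): the system above couples a ResNet encoder with a UNet-style decoder. An outline of that architecture family using the segmentation_models_pytorch library, with ImageNet weights standing in for the paper's large-scale endoscopic pre-training:
        import torch
        import segmentation_models_pytorch as smp

        model = smp.Unet(
            encoder_name="resnet34",       # ResNet backbone as the encoder
            encoder_weights="imagenet",    # placeholder for endoscopy pre-training
            in_channels=3,
            classes=1,                     # one channel: per-pixel neoplasia probability
        )

        frame = torch.randn(1, 3, 256, 256)        # one RGB endoscopic frame
        with torch.no_grad():
            mask = torch.sigmoid(model(frame))     # segmentation map for delineation
        score = mask.max().item()                  # a simple image-level detection score
        print(mask.shape, round(score, 3))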
  12. Acta Ophthalmol. 2019 Nov 26.
       PURPOSE: To validate the performance of a commercially available, CE-certified deep learning (DL) system, RetCAD v.1.3.0 (Thirona, Nijmegen, The Netherlands), for the joint automatic detection of diabetic retinopathy (DR) and age-related macular degeneration (AMD) in colour fundus (CF) images on a dataset with mixed presence of eye diseases.
    METHODS: Evaluation of joint detection of referable DR and AMD was performed on a DR-AMD dataset with 600 images acquired during routine clinical practice, containing referable and non-referable cases of both diseases. Each image was graded for DR and AMD by an experienced ophthalmologist to establish the reference standard (RS), and by four independent observers for comparison with human performance. Validation was further assessed on Messidor (1200 images) for individual identification of referable DR, and on the Age-Related Eye Disease Study (AREDS) dataset (133 821 images) for referable AMD, against the corresponding RS.
    RESULTS: Regarding joint validation on the DR-AMD dataset, the system achieved an area under the ROC curve (AUC) of 95.1% for detection of referable DR (SE = 90.1%, SP = 90.6%). For referable AMD, the AUC was 94.9% (SE = 91.8%, SP = 87.5%). Average human performance for DR was SE = 61.5% and SP = 97.8%; for AMD, SE = 76.5% and SP = 96.1%. Regarding detection of referable DR in Messidor, AUC was 97.5% (SE = 92.0%, SP = 92.1%); for referable AMD in AREDS, AUC was 92.7% (SE = 85.8%, SP = 86.0%).
    CONCLUSION: The validated system performs comparably to human experts at simultaneous detection of DR and AMD. This shows that DL systems can facilitate access to joint screening of eye diseases and become a quick and reliable support for ophthalmological experts.
    Keywords:  age-related macular degeneration; automated detection; deep learning; diabetic retinopathy; observer study; validation
    DOI:  https://doi.org/10.1111/aos.14306
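    Editor's sketch (toy numbers, not study data): the metrics reported above (AUC, sensitivity, specificity) can be computed from per-image referable/non-referable scores as follows:
        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = referable disease
        y_score = np.array([0.92, 0.20, 0.74, 0.61, 0.43, 0.10, 0.85, 0.31, 0.55, 0.47])

        auc = roc_auc_score(y_true, y_score)
        y_pred = (y_score >= 0.5).astype(int)                # assumed operating threshold
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"AUC = {auc:.2f}, SE = {sensitivity:.2f}, SP = {specificity:.2f}")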
  13. J Oncol. 2019;2019: 6153041
      The term "artificial intelligence" (AI) includes computational algorithms that can perform tasks considered typical of human intelligence, with partial to complete autonomy, to produce new beneficial outputs from specific inputs. The development of AI is largely based on the introduction of artificial neural networks (ANN) that allowed the introduction of the concepts of "computational learning models," machine learning (ML) and deep learning (DL). AI applications appear promising for radiology scenarios potentially improving lesion detection, segmentation, and interpretation with a recent application also for interventional radiology (IR) practice, including the ability of AI to offer prognostic information to both patients and physicians about interventional oncology procedures. This article integrates evidence-reported literature and experience-based perceptions to assist not only residents and fellows who are training in interventional radiology but also practicing colleagues who are approaching to locoregional mini-invasive treatments.
    DOI:  https://doi.org/10.1155/2019/6153041
  14. J Otolaryngol Head Neck Surg. 2019 Nov 26. 48(1): 66
       BACKGROUND: Otologic diseases are often difficult to diagnose accurately for primary care providers. Deep learning methods have been applied with great success in many areas of medicine, often outperforming well trained human observers. The aim of this work was to develop and evaluate an automatic software prototype to identify otologic abnormalities using a deep convolutional neural network.
    MATERIAL AND METHODS: A database of 734 unique otoscopic images of various ear pathologies, including 63 cerumen impactions, 120 tympanostomy tubes, and 346 normal tympanic membranes, was acquired. Eighty percent of the images were used to train a convolutional neural network, and the remaining 20% were used for algorithm validation. Image augmentation was employed on the training dataset to increase the number of training images. The general network architecture consisted of three convolutional layers plus batch normalization and dropout layers to avoid overfitting.
    RESULTS: The validation based on 45 datasets not used for model training revealed that the proposed deep convolutional neural network is capable of identifying and differentiating between normal tympanic membranes, tympanostomy tubes, and cerumen impactions with an overall accuracy of 84.4%.
    CONCLUSION: Our study shows that deep convolutional neural networks hold immense potential as a diagnostic adjunct for otologic disease management.
    Keywords:  Artificial intelligence; Automated; Deep learning; Machine learning; Neural network; Otoscopy
    DOI:  https://doi.org/10.1186/s40463-019-0389-9
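    Editor's sketch (filter counts, image size and augmentation settings are assumptions): a small convolutional network with three convolutional layers plus batch normalization and dropout, echoing the architecture outline in the abstract:
        import tensorflow as tf
        from tensorflow.keras import layers

        num_classes = 3   # e.g. normal tympanic membrane, tympanostomy tube, cerumen impaction
        model = tf.keras.Sequential([
            layers.Input(shape=(128, 128, 3)),
            layers.RandomFlip("horizontal"),       # simple in-model image augmentation
            layers.Conv2D(32, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dropout(0.5),                   # dropout to limit overfitting
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()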
  15. Genes (Basel). 2019 Nov 27. 10(12): E978. [Epub ahead of print]
      The amount of data collected and managed in (bio)medicine is ever-increasing. Thus, there is a need to rapidly and efficiently collect, analyze, and characterize all this information. Artificial intelligence (AI), with an emphasis on deep learning, holds great promise in this area and is already being successfully applied to basic research, diagnosis, drug discovery, and clinical trials. Rare diseases (RDs), which are severely underrepresented in basic and clinical research, can particularly benefit from AI technologies. Of the more than 7000 RDs described worldwide, only 5% have a treatment. The ability of AI technologies to integrate and analyze data from different sources (e.g., multi-omics, patient registries, and so on) can be used to overcome RDs' challenges (e.g., low diagnostic rates, reduced number of patients, geographical dispersion, and so on). Ultimately, RDs' AI-mediated knowledge could significantly boost therapy development. Presently, there are AI approaches being used in RDs and this review aims to collect and summarize these advances. A section dedicated to congenital disorders of glycosylation (CDG), a particular group of orphan RDs that can serve as a potential study model for other common diseases and RDs, has also been included.
    Keywords:  artificial intelligence; big data; congenital disorders of glycosylation; diagnosis; drug repurposing; machine learning; personalized medicine; rare diseases
    DOI:  https://doi.org/10.3390/genes10120978
  16. Nature. 2019 Nov;575(7784): 607-617
      Guided by brain-like 'spiking' computational frameworks, neuromorphic computing (brain-inspired computing for machine intelligence) promises to realize artificial intelligence while reducing the energy requirements of computing platforms. This interdisciplinary field began with the implementation of silicon circuits for biological neural routines, but has evolved to encompass the hardware implementation of algorithms with spike-based encoding and event-driven representations. Here we provide an overview of the developments in neuromorphic computing for both algorithms and hardware and highlight the fundamentals of learning and hardware frameworks. We discuss the main challenges and the future prospects of neuromorphic computing, with emphasis on algorithm-hardware codesign.
    DOI:  https://doi.org/10.1038/s41586-019-1677-2
  17. BMJ Open. 2019 Nov 28. 9(11): e032703
       OBJECTIVES: We aimed to test whether (1) adding nutrition predictor variables and/or (2) using machine learning models improves cardiovascular death prediction compared with standard Cox models without nutrition predictor variables.
    DESIGN: Retrospective study.
    SETTING: Six waves of National Health and Nutrition Examination Survey (NHANES) data collected from 1999 to 2011, linked to the National Death Index (NDI).
    PARTICIPANTS: 29 390 participants were included in the training set for model derivation and 12 600 were included in the test set for model evaluation. Our study sample was approximately 20% black race and 25% Hispanic ethnicity.
    PRIMARY AND SECONDARY OUTCOME MEASURES: Time from NHANES interview until the minimum of time of cardiovascular death or censoring.
    RESULTS: A standard risk model excluding nutrition data overestimated risk nearly two-fold (calibration slope of predicted vs true risk: 0.53 (95% CI: 0.50 to 0.55)) with moderate discrimination (C-statistic: 0.87 (0.86 to 0.89)). Nutrition data alone failed to improve performance while machine learning alone improved calibration to 1.18 (0.92 to 1.44) and discrimination to 0.91 (0.90 to 0.92). Both together substantially improved calibration (slope: 1.01 (0.76 to 1.27)) and discrimination (C-statistic: 0.93 (0.92 to 0.94)).
    CONCLUSION: Our results indicate that the inclusion of nutrition data with available machine learning algorithms can substantially improve cardiovascular risk prediction.
    Keywords:  cardiovascular disease; machine learning; nutrition; risk prediction
    DOI:  https://doi.org/10.1136/bmjopen-2019-032703
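    Editor's sketch (toy data on a binary simplification; the study analyses censored survival times): the two headline metrics above, calibration slope and C-statistic, can be illustrated as follows:
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
        pred_risk = np.array([0.05, 0.10, 0.60, 0.20, 0.70, 0.80,
                              0.15, 0.55, 0.30, 0.25, 0.65, 0.40])

        # Calibration slope: regress outcomes on the logit of the predicted risk;
        # a slope near 1 means predicted risks track observed risks.
        logit = np.log(pred_risk / (1 - pred_risk)).reshape(-1, 1)
        slope = LogisticRegression().fit(logit, y_true).coef_[0][0]

        # Discrimination: for binary outcomes the C-statistic equals the AUC.
        c_stat = roc_auc_score(y_true, pred_risk)
        print(f"calibration slope = {slope:.2f}, C-statistic = {c_stat:.2f}")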