bims-arihec Biomed News
on Artificial Intelligence in Healthcare
Issue of 2019‒10‒20
nineteen papers selected by
Céline Bélanger
Cogniges Inc.


  1. Int J Environ Res Public Health. 2019 Oct 11. 16(20). pii: E3847. [Epub ahead of print]
      The purpose of this descriptive research paper is to initiate discussions on the use of innovative technologies and their potential to support the research and development of pan-Canadian monitoring and surveillance activities associated with environmental impacts on health and within the health system. Its primary aim is to provide a review of disruptive technologies and their current uses in the environment and in healthcare. Drawing on extensive experience in population-level surveillance through the use of technology, on knowledge from prior projects in the field, and on a review of the technologies, this paper is meant to serve as the initial step toward a better understanding of the research area. In doing so, we hope to be able to better assess which technologies might best be leveraged to advance this unique intersection of health and environment. This paper first outlines the current use of technologies at the intersection of public health and the environment, in particular, Artificial Intelligence (AI), Blockchain, and the Internet of Things (IoT). The paper provides a description of each of these technologies, along with a summary of their current applications and a description of the challenges one might face in adopting them. Thereafter, a high-level reference architecture that addresses the challenges of the described technologies and could potentially be incorporated into the pan-Canadian surveillance system is conceived and presented.
    Keywords:  IoT; artificial intelligence; blockchain; climate change; disruptive technologies; environment; global health; reference architecture; surveillance system
    DOI:  https://doi.org/10.3390/ijerph16203847
  2. Nurs Leadersh (Tor Ont). 2019 Jun. 32(2): 31-45. pii: cjnl.2019.25963. [Epub ahead of print]
      The rapid integration of artificial intelligence (AI) into healthcare delivery has not only provided a glimpse into an enhanced digital future but also raised significant concerns about the social and ethical implications of this evolution. Nursing leaders have a critical role to play in advocating for the just and effective use of AI health solutions. To fulfill this responsibility, nurses need information on the widespread reach of AI and, perhaps more importantly, how the development, deployment and evaluation of these technologies can be influenced.
    DOI:  https://doi.org/10.12927/cjnl.2019.25963
  3. Br J Anaesth. 2019 Oct 15. pii: S0007-0912(19)30646-4. [Epub ahead of print]
      BACKGROUND: Rapid, preoperative identification of patients with the highest risk for medical complications is necessary to ensure that limited infrastructure and human resources are directed towards those most likely to benefit. Existing risk scores either lack specificity at the patient level or utilise the American Society of Anesthesiologists (ASA) physical status classification, which requires a clinician to review the chart.
    METHODS: We report on the use of machine learning algorithms, specifically random forests, to create a fully automated score that predicts postoperative in-hospital mortality based solely on structured data available at the time of surgery. Electronic health record data from 53 097 surgical patients (2.01% mortality rate) who underwent general anaesthesia between April 1, 2013 and December 10, 2018 in a large US academic medical centre were used to extract 58 preoperative features.
    RESULTS: Using a random forest classifier we found that automatically obtained preoperative features (area under the curve [AUC] of 0.932, 95% confidence interval [CI] 0.910-0.951) outperforms Preoperative Score to Predict Postoperative Mortality (POSPOM) scores (AUC of 0.660, 95% CI 0.598-0.722), Charlson comorbidity scores (AUC of 0.742, 95% CI 0.658-0.812), and ASA physical status (AUC of 0.866, 95% CI 0.829-0.897). Including the ASA physical status with the preoperative features achieves an AUC of 0.936 (95% CI 0.917-0.955).
    CONCLUSIONS: This automated score outperforms the ASA physical status score, the Charlson comorbidity score, and the POSPOM score for predicting in-hospital mortality. Additionally, we integrate this score with a previously published postoperative score to demonstrate the extent to which patient risk changes during the perioperative period.
    Keywords:  electronic health record; hospital mortality; machine learning; perioperative outcome; risk assessment
    DOI:  https://doi.org/10.1016/j.bja.2019.07.030
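The AUC values used above to compare the automated score against POSPOM, Charlson, and ASA can be read as the probability that a randomly chosen death receives a higher risk score than a randomly chosen survivor. A minimal pure-Python sketch of that rank-based computation (illustrative only, not the authors' implementation; the toy scores and labels are invented):

```python
def auc(scores, labels):
    # AUC as the probability that a randomly chosen positive case (label 1)
    # receives a higher risk score than a randomly chosen negative case,
    # with ties counted as half a win (the Mann-Whitney formulation)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: every positive case outranks every negative case
labels = [1, 0, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.4, 0.1, 0.7, 0.3]
print(auc(scores, labels))  # 1.0
```

A real evaluation would also report a confidence interval (the study uses 95% CIs), typically via bootstrapping the score/label pairs.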
  4. BMC Public Health. 2019 Oct 15. 19(1): 1288
      BACKGROUND: Human activity and the interaction between health conditions and activity is a critical part of understanding the overall function of individuals. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) models function as all aspects of an individual's interaction with the world, including organismal concepts such as individual body structures, functions, and pathologies, as well as the outcomes of the individual's interaction with their environment, referred to as activity and participation. Function, particularly activity and participation outcomes, is an important indicator of health at both the individual and population levels, as it is highly correlated with quality of life and is a critical component of identifying resource needs. Since it reflects the cumulative impact of health conditions on individuals and is not disease specific, its use as a health indicator helps to address major barriers to holistic, patient-centered care that result from multiple, and often competing, disease-specific interventions. While the need for better information on function has been widely endorsed, this has not translated into its routine incorporation into modern health systems.
    PURPOSE: We present the importance of capturing information on activity as a core component of modern health systems and identify specific steps and analytic methods that can be used to make it more available for improving patient care. We identify challenges in the use of activity and participation information, such as a lack of consistent documentation and diversity of data specificity and representation across providers, health systems, and national surveys. We describe how activity and participation information can be more effectively captured, and how health informatics methodologies, including natural language processing (NLP), can enable automatically locating, extracting, and organizing this information on a large scale, supporting standardization and utilization with minimal additional provider burden. We examine the analytic requirements and potential challenges of capturing this information with informatics, and describe how data-driven techniques can combine with common standards and documentation practices to make activity and participation information standardized and accessible for improving patient care.
    RECOMMENDATIONS: We recommend four specific actions to improve the capture and analysis of activity and participation information throughout the continuum of care: (1) make activity and participation annotation standards and datasets available to the broader research community; (2) define common research problems in automatically processing activity and participation information; (3) develop robust, machine-readable ontologies for function that describe the components of activity and participation information and their relationships; and (4) establish standards for how and when to document activity and participation status during clinical encounters. We further provide specific short-term goals to make significant progress in each of these areas within a reasonable time frame.
    Keywords:  Clinical informatics; Disability evaluation; Electronic health records; Health informatics; Natural language processing; Public health informatics
    DOI:  https://doi.org/10.1186/s12889-019-7630-3
  5. PLoS One. 2019;14(10): e0223318
      BACKGROUND: Timely data is key to effective public health responses to epidemics. Drug overdose deaths are identified in surveillance systems through ICD-10 codes present on death certificates. ICD-10 coding takes time, but free-text information is available on death certificates prior to ICD-10 coding. The objective of this study was to develop a machine learning method to classify free-text death certificates as drug overdoses to provide faster drug overdose mortality surveillance.
    METHODS: Using 2017-2018 Kentucky death certificate data, free-text fields were tokenized and features were created from these tokens using natural language processing (NLP). Word, bigram, and trigram features were created, as well as features indicating the part-of-speech of each word. These features were then used to train machine learning classifiers on 2017 data. The resulting models were tested on 2018 Kentucky data and compared to a simple rule-based classification approach. Documented code for this method is available for reuse and extensions: https://github.com/pjward5656/dcnlp.
    RESULTS: The top scoring machine learning model achieved 0.96 positive predictive value (PPV) and 0.98 sensitivity for an F-score of 0.97 in identification of fatal drug overdoses on test data. This machine learning model achieved significantly higher performance for sensitivity (p<0.001) than the rule-based approach. Additional feature engineering may improve the model's prediction. This model can be deployed on death certificates as soon as the free-text is available, eliminating the time needed to code the death certificates.
    CONCLUSION: Machine learning using natural language processing is a relatively new approach in the context of surveillance of health conditions. This method presents an accessible application of machine learning that improves the timeliness of drug overdose mortality surveillance. As such, it can be employed to inform public health responses to the drug overdose epidemic in near-real time as opposed to several weeks following events.
    DOI:  https://doi.org/10.1371/journal.pone.0223318
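The word, bigram, and trigram feature extraction described in the methods above can be sketched roughly as follows. This is an illustrative stand-in, not the code in the linked repository; the function name and sample cause-of-death text are invented:

```python
from collections import Counter

def ngram_features(text, n_max=3):
    # Tokenize free text and emit unigram, bigram, and trigram counts,
    # mirroring the word/bigram/trigram features the study describes
    tokens = text.lower().split()
    feats = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats

feats = ngram_features("acute fentanyl intoxication")
print(feats["fentanyl"])                     # 1 (unigram)
print(feats["acute fentanyl"])               # 1 (bigram)
print(feats["acute fentanyl intoxication"])  # 1 (trigram)
```

In the study these sparse features (plus part-of-speech indicators) feed a supervised classifier trained on one year of ICD-10-labeled certificates and tested on the next.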
  6. Chem Res Toxicol. 2019 Oct 18.
      Drug toxicity evaluation is an essential process of drug development as it is reportedly responsible for the attrition of approximately 30% of drug candidates. The rapid increase in the number and types of large toxicology datasets together with the advances in computational methods may be used to improve many steps in drug safety evaluation. The development of in silico models to screen and understand mechanisms of drug toxicity may be particularly beneficial in the early stages of drug development where early toxicity assessment can most reduce expenses and labor time. To facilitate this, machine learning methods have been employed to evaluate drug toxicity but are often limited by small and less diverse datasets. Recent advances in machine learning methods together with the rapid increase in big toxicity data such as molecular descriptors, toxicogenomics, and high-throughput bioactivity data may help alleviate some current challenges. In this article, the most common machine learning methods used in toxicity assessment are reviewed together with examples of toxicity studies that have used machine learning methodology. Furthermore, a comprehensive overview of the different types of toxicity tools and datasets available to build in silico toxicity prediction models has been provided to give an overview of the current big toxicity data landscape and highlight opportunities and challenges related to them.
    DOI:  https://doi.org/10.1021/acs.chemrestox.9b00227
  7. Artif Intell Med. 2019 Sep. 100: 101706. pii: S0933-3657(19)30108-3. [Epub ahead of print]
      Artificial intelligence (AI) will pave the way to a new era in medicine. However, currently available AI systems do not interact with a patient, e.g., for anamnesis, and thus are used only by physicians for predictions in diagnosis or prognosis; such systems are nevertheless widely used, e.g., in diabetes or cancer prediction. In the current study, we developed an AI (a virtual doctor) that can interact with a patient autonomously using a speech recognition and speech synthesis system, which is particularly important for, e.g., rural areas, where the availability of primary medical care is strongly limited by low population densities. As a proof-of-concept, the system is able to predict type 2 diabetes mellitus (T2DM) based on non-invasive sensors and deep neural networks. Moreover, the system provides an easy-to-interpret probability estimation for T2DM for a given patient. Besides the development of the AI, we further analyzed the acceptance of AI in healthcare among young people to estimate the impact of such a system in the future.
    Keywords:  Artificial intelligence; Deep learning; Diabetes; Diagnostics; E-health; Machine learning
    DOI:  https://doi.org/10.1016/j.artmed.2019.101706
  8. J Dermatolog Treat. 2019 Oct 18. 1-52
      Background: Software systems using artificial intelligence for medical purposes have been developed in recent years. The success of deep neural networks (DNN) in 2012 in the image recognition challenge ImageNet LSVRC 2010 fuelled expectations of the potential for using such systems in dermatology.
    Objective: To evaluate the ways in which machine learning has been utilised in dermatology to date and provide an overview of the findings in current literature on the subject.
    Methods: We conducted a systematic review of existing literature, identifying the literature through a systematic search of the PubMed database. Two doctors assessed screening and eligibility with respect to pre-determined inclusion and exclusion criteria.
    Results: 2,175 publications were identified, and 64 publications were included. We identified eight major categories where machine learning tools were tested in dermatology. Most systems involved image recognition tools that were primarily aimed at binary classification of malignant melanoma (MM). Short system descriptions and results of all included systems are presented in tables.
    Conclusion: We present a complete overview of artificial intelligence implemented in dermatology. Impressive outcomes were reported in all eight identified categories, but head-to-head comparison proved difficult. The many areas of dermatology where we identified machine learning tools indicate the diversity of machine learning applications in the field.
    Keywords:  Artificial Intelligence; Computer Assisted Diagnostics; Deep Neural Network; Dermatology
    DOI:  https://doi.org/10.1080/09546634.2019.1682500
  9. JAMA Netw Open. 2019 Oct 02. 2(10): e1913436
      Importance: A high proportion of suspicious pigmented skin lesions referred for investigation are benign. Techniques to improve the accuracy of melanoma diagnoses throughout the patient pathway are needed to reduce the pressure on secondary care and pathology services.
    Objective: To determine the accuracy of an artificial intelligence algorithm in identifying melanoma in dermoscopic images of lesions taken with smartphone and digital single-lens reflex (DSLR) cameras.
    Design, Setting, and Participants: This prospective, multicenter, single-arm, masked diagnostic trial took place in dermatology and plastic surgery clinics in 7 UK hospitals. Dermoscopic images of suspicious and control skin lesions from 514 patients with at least 1 suspicious pigmented skin lesion scheduled for biopsy were captured on 3 different cameras. Data were collected from January 2017 to July 2018. Clinicians and the Deep Ensemble for Recognition of Malignancy, a deterministic artificial intelligence algorithm trained to identify melanoma in dermoscopic images of pigmented skin lesions using deep learning techniques, assessed the likelihood of melanoma. Initial data analysis was conducted in September 2018; further analysis was conducted from February 2019 to August 2019.
    Interventions: Clinician and algorithmic assessment of melanoma.
    Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of the algorithmic and specialist assessment, determined using histopathology diagnosis as the criterion standard.
    Results: The study population of 514 patients included 279 women (55.7%) and 484 white patients (96.8%), with a mean (SD) age of 52.1 (18.6) years. A total of 1550 images of skin lesions were included in the analysis (551 [35.6%] biopsied lesions; 999 [64.4%] control lesions); 286 images (18.6%) were used to train the algorithm, and a further 849 (54.8%) images were missing or unsuitable for analysis. Of the biopsied lesions that were assessed by the algorithm and specialists, 125 (22.7%) were diagnosed as melanoma. Of these, 77 (16.7%) were used for the primary analysis. The algorithm achieved an AUROC of 90.1% (95% CI, 86.3%-94.0%) for biopsied lesions and 95.8% (95% CI, 94.1%-97.6%) for all lesions using iPhone 6s images; an AUROC of 85.8% (95% CI, 81.0%-90.7%) for biopsied lesions and 93.8% (95% CI, 91.4%-96.2%) for all lesions using Galaxy S6 images; and an AUROC of 86.9% (95% CI, 80.8%-93.0%) for biopsied lesions and 91.8% (95% CI, 87.5%-96.1%) for all lesions using DSLR camera images. At 100% sensitivity, the algorithm achieved a specificity of 64.8% with iPhone 6s images. Specialists achieved an AUROC of 77.8% (95% CI, 72.5%-81.9%) and a specificity of 69.9%.
    Conclusions and Relevance: In this study, the algorithm demonstrated an ability to identify melanoma from dermoscopic images of selected lesions with an accuracy similar to that of specialists.
    DOI:  https://doi.org/10.1001/jamanetworkopen.2019.13436
  10. Curr Opin Neurol. 2019 Oct 10.
      PURPOSE OF REVIEW: To discuss recent applications of artificial intelligence within the field of neuro-oncology and highlight emerging challenges in integrating artificial intelligence within clinical practice.
    RECENT FINDINGS: In the field of image analysis, artificial intelligence has shown promise in aiding clinicians with incorporating an increasing amount of data in genomics, detection, diagnosis, classification, risk stratification, prognosis, and treatment response. Artificial intelligence has also been applied in epigenetics, pathology, and natural language processing.
    SUMMARY: Although nascent, applications of artificial intelligence within neuro-oncology show significant promise. Artificial intelligence algorithms will likely improve our understanding of brain tumors and help drive future innovations in neuro-oncology.
    DOI:  https://doi.org/10.1097/WCO.0000000000000761
  11. Clin Gastroenterol Hepatol. 2019 Oct 14. pii: S1542-3565(19)31112-7. [Epub ahead of print]
      BACKGROUND AND AIMS: Physician adherence to published colonoscopy surveillance guidelines varies. We aimed to develop and validate an automated clinical decision support (CDS) algorithm that can extract procedure and pathology data from the electronic medical record (EMR) and generate surveillance intervals congruent with guidelines, which might increase physician adherence.
    METHODS: We constructed a CDS algorithm based on guidelines from the United States Multi-Society Task Force on Colorectal Cancer. We used a randomly generated validation dataset of 300 outpatient colonoscopies performed at the Cleveland Clinic from 2012 through 2016 to evaluate the accuracy of extracting data from reports stored in the EMR using natural language processing (NLP). We compared colonoscopy follow-up recommendations from the CDS algorithm, endoscopists, and task force guidelines. Using a testing dataset of 2439 colonoscopies, we compared endoscopist recommendations with those of the algorithm.
    RESULTS: Manual review of the validation dataset confirmed the NLP program accurately extracted procedure and pathology data for all cases. Recommendations made by endoscopists and the CDS algorithm were guideline-concordant in 62% and 99% of cases respectively. Discrepant recommendations by endoscopists were earlier than recommended in 94% of the cases. In the testing dataset, 69% of endoscopist and NLP-CDS algorithm recommendations were concordant. Discrepant recommendations by endoscopists were earlier than guidelines in 91% of cases.
    CONCLUSIONS: We constructed and tested an automated CDS algorithm that can use NLP-extracted data from the EMR to generate follow-up colonoscopy surveillance recommendations based on published guidelines.
    Keywords:  USMSTF; management; quality improvement; software
    DOI:  https://doi.org/10.1016/j.cgh.2019.10.013
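Once the NLP step has extracted a structured pathology finding, the guideline-lookup core of such a CDS algorithm reduces to a mapping from findings to follow-up intervals. A toy sketch follows; the finding names and year values are placeholders for illustration, not the actual USMSTF recommendations:

```python
# Illustrative only: these findings and intervals are invented placeholders,
# not the published USMSTF surveillance recommendations
SURVEILLANCE_YEARS = {
    "normal": 10,
    "small_tubular_adenomas": 7,
    "advanced_adenoma": 3,
}

def recommend_interval(finding):
    # Map an NLP-extracted pathology finding to a follow-up interval in years;
    # returns None when the finding is not covered by the table
    return SURVEILLANCE_YEARS.get(finding)

print(recommend_interval("advanced_adenoma"))  # 3
```

The hard part in practice, as the study shows, is the upstream extraction: the lookup itself is deterministic, which is why the CDS recommendations were guideline-concordant in 99% of cases versus 62% for endoscopists.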
  12. Artif Intell Med. 2019 Aug. 99: 101704. pii: S0933-3657(17)30178-1. [Epub ahead of print]
      INTRODUCTION: Machine learning capability holds promise to inform disease models, the discovery and development of novel disease-modifying therapeutics, and prevention strategies in psychiatry. Herein, we provide an introduction on how machine learning/Artificial Intelligence (AI) may instantiate such capabilities, as well as provide a rationale for its application to psychiatry in both research and clinical ecosystems.
    METHODS: The PubMed and PsycINFO databases were searched from 1966 to June 2016 for the keywords: Big Data, Machine Learning, Precision Medicine, Artificial Intelligence, Mental Health, Mental Disease, Psychiatry, Data Mining, RDoC, and Research Domain Criteria. Articles selected for review were those determined to be aligned with the objective of this particular paper.
    RESULTS: Results indicate that AI is a viable option to build useful predictors of outcome while offering objective and comparable accuracy metrics, a unique opportunity, particularly in mental health research. The approach has also consistently brought notable insight into disease models through processing the vast amount of already available multi-domain, semi-structured medical data. The opportunity for AI in psychiatry, in addition to disease-model refinement, is in characterizing those at risk, and it is likely also relevant to personalizing and discovering therapeutics.
    CONCLUSIONS: Machine learning currently provides an opportunity to parse disease models in complex, multi-factorial disease states (e.g. mental disorders) and could possibly inform treatment selection with existing therapies and provide bases for domain-based therapeutic discovery.
    Keywords:  ADHD; AI; Algorithms; Alzheimer; Big data; DSM-5; Schizophrenia; Data mining; Decision trees; Depression; IBM Watson; MRI; Machine learning; Mental disease; Mental health; Neuro networking; Precision medicine; Psychiatry; RDoC; Random forests; Research domain criteria; Support vector machines; fMRI
    DOI:  https://doi.org/10.1016/j.artmed.2019.101704
  13. JMIR Ment Health. 2019 Oct 18. 6(10): e14166
      BACKGROUND: The use of conversational agent interventions (including chatbots and robots) in mental health is growing at a fast pace. Recent reviews have focused exclusively on a subset of embodied conversational agent interventions despite other modalities aiming to achieve the common goal of improved mental health.
    OBJECTIVE: This study aimed to review the use of conversational agent interventions in the treatment of mental health problems.
    METHODS: We performed a systematic search using relevant databases (MEDLINE, EMBASE, PsycINFO, Web of Science, and Cochrane library). Studies that reported on an autonomous conversational agent that simulated conversation and reported on a mental health outcome were included.
    RESULTS: A total of 13 studies were included in the review. Among them, 4 full-scale randomized controlled trials (RCTs) were included. The rest were feasibility, pilot RCTs and quasi-experimental studies. Interventions were diverse in design and targeted a range of mental health problems using a wide variety of therapeutic orientations. All included studies reported reductions in psychological distress postintervention. Furthermore, 5 controlled studies demonstrated significant reductions in psychological distress compared with inactive control groups. In addition, 3 controlled studies comparing interventions with active control groups failed to demonstrate superior effects. Broader utility in promoting well-being in nonclinical populations was unclear.
    CONCLUSIONS: The efficacy and acceptability of conversational agent interventions for mental health problems are promising. However, a more robust experimental design is required to demonstrate efficacy and efficiency. A focus on streamlining interventions, demonstrating equivalence to other treatment modalities, and elucidating mechanisms of action has the potential to increase acceptance by users and clinicians and maximize reach.
    Keywords:  artificial intelligence; chatbot; conversational agent; digital health; mental health; psychiatry; stress, pychological; therapy, computer-assisted
    DOI:  https://doi.org/10.2196/14166
  14. Digit Health. 2019 Jan-Dec. 5: 2055207619880676
      Objective: The objective of this study was to assess whether a version of the Smoke Free app with a supportive chatbot powered by artificial intelligence (versus a version without the chatbot) led to increased engagement and short-term quit success.
    Methods: Daily or non-daily smokers aged ≥18 years who purchased the 'pro' version of the app and set a quit date were randomly assigned (unequal allocation) to receive the app with or without the chatbot. The outcomes were engagement (i.e. total number of logins over the study period) and self-reported abstinence at a one-month follow-up. Unadjusted and adjusted negative binomial and logistic regression models were fitted to estimate incidence rate ratios (IRRs) and odds ratios (ORs) for the associations of interest.
    Results: A total of 57,214 smokers were included (intervention: 9.3% (5,339); control: 90.7% (51,875)). The app with the chatbot compared with the standard version led to a 101% increase in engagement (IRRadj = 2.01, 95% confidence interval (CI) = 1.92-2.11, p < .001). The one-month follow-up rate was 10.6% (intervention: 19.9% (1,061/5,339); control: 9.7% (5,050/51,875)). Smokers allocated to the intervention had greater odds of quit success (missing equals smoking: 844/5,339 vs. 3,704/51,875, ORadj = 2.38, 95% CI = 2.19-2.58, p < .001; follow-up only: 844/1,061 vs. 3,704/5,050, ORadj = 1.36, 95% CI = 1.16-1.61, p < .001).
    Conclusion: The addition of a supportive chatbot to a popular smoking cessation app more than doubled user engagement. In view of very low follow-up rates, there is low quality evidence that the addition also increased self-reported smoking cessation.
    Keywords:  Chatbot; engagement; mHealth; smartphone apps; smoking cessation
    DOI:  https://doi.org/10.1177/2055207619880676
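The unadjusted odds ratio behind the missing-equals-smoking comparison above can be reproduced directly from the counts in the abstract. This toy calculation is illustrative; being unadjusted, it differs slightly from the adjusted ORadj of 2.38 the study reports:

```python
def odds_ratio(events_a, n_a, events_b, n_b):
    # Unadjusted odds ratio: odds of the event in group a (intervention)
    # divided by the odds in group b (control)
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Quit-success counts from the missing-equals-smoking analysis:
# 844/5,339 (intervention) vs. 3,704/51,875 (control)
print(round(odds_ratio(844, 5339, 3704, 51875), 2))  # 2.44
```

The adjusted models in the paper additionally control for covariates via negative binomial and logistic regression, which is why the published estimates do not match the raw ratio exactly.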
  15. J Am Geriatr Soc. 2019 Oct 16.
      BACKGROUND/OBJECTIVES: Many older adults wish to age in place, and voice-controlled intelligent personal assistants (VIPAs; eg, Amazon Echo and Google Home) could potentially support unmet home needs. No prior studies have researched the real-world use of VIPAs among older adults. We sought to explore how older adults and caregivers utilize VIPAs.
    DESIGN/MEASUREMENT: Retrospective review of all verified purchase reviews of the Amazon Echo posted on Amazon.com between January 2015 and January 2018, with filtering for health-related older adult key words. Open-ended reviews were qualitatively analyzed to identify relevant themes.
      RESULTS: On retrieval, there were 73 549 reviews; after key word filtering, 125 reviews were analyzed. Five major themes were identified: (1) entertainment ("For two very senior citizens…we have really had fun with Echo. She tells us jokes, answers questions, plays music."); (2) companionship ("A senior living alone…I now have Alex to talk to."); (3) home control; (4) reminders ("I needed something that would provide me with information I couldn't remember well, such as the date, day, or my schedule…I highly recommend for anyone with memory challenges"); and (5) emergency communication. Several felt it reduced the burden on caregivers: "…You also feel guilt from fear of overburdening your caregivers. Alexa has alleviated much of this." Specifically, caregivers found that: "By making playlists of songs from her youth whoever is providing care, family or professional caregiver, can simply request the right song for the moment in order to sooth, redirect, or distract Mom." Alternatively, negative reviewers felt the VIPA misunderstood them or could not adequately respond to specific health questions.
    CONCLUSION: VIPAs are a low-cost artificial intelligence that can support older adults in the home and potentially reduce caregiver burden. This study is the first to explore VIPA use among older adults, and further studies are needed to examine the direct benefits of VIPAs in supporting aging in place.
    Keywords:  aging in place; artificial intelligence; caregivers; technology
    DOI:  https://doi.org/10.1111/jgs.16217
  16. Am J Manag Care. 2019 Oct 01. 25(10): e310-e315
      OBJECTIVES: Current models for patient risk prediction rely on practitioner expertise and domain knowledge. This study presents a deep learning model, a type of machine learning that does not require human inputs, to analyze complex clinical and financial data for population risk stratification.
    STUDY DESIGN: A comparative predictive analysis of deep learning versus other popular risk prediction modeling strategies, using medical claims data from a cohort of 112,641 pediatric accountable care organization members.
    METHODS: "Skip-Gram," an unsupervised deep learning approach that uses neural networks for prediction modeling, used data from 2014 and 2015 to predict the risk of hospitalization in 2016. The area under the curve (AUC) of the deep learning model was compared with that of both the Clinical Classifications Software and the commercial DxCG Intelligence predictive risk models, each with and without demographic and utilization features. We then calculated costs for patients in the top 1% and 5% of hospitalization risk identified by each model.
    RESULTS: The deep learning model performed the best across 6 predictive models, with an AUC of 75.1%. The top 1% of members selected by the deep learning model had a combined healthcare cost $5 million higher than that of the group identified by the DxCG Intelligence model.
    CONCLUSIONS: The deep learning model outperforms the traditional risk models in prospective hospitalization prediction. Thus, deep learning may improve the ability of managed care organizations to perform predictive modeling of financial risk, in addition to improving the accuracy of risk stratification for population health management activities.
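Skip-Gram models learn embeddings from (target, context) pairs; applied to claims data as above, a member's ordered sequence of codes plays the role of a sentence. A minimal sketch of the pair-generation step (the diagnosis codes and window size below are hypothetical, and this is not the study's pipeline):

```python
def skipgram_pairs(codes, window=2):
    # Generate (target, context) training pairs from one member's ordered
    # claim codes: each code is paired with its neighbors within the window,
    # the training signal for a skip-gram embedding model
    pairs = []
    for i, target in enumerate(codes):
        for j in range(max(0, i - window), min(len(codes), i + window + 1)):
            if j != i:
                pairs.append((target, codes[j]))
    return pairs

# Hypothetical sequence of ICD-10 codes from one member's claims history
print(skipgram_pairs(["J45", "R05", "J20"], window=1))
# [('J45', 'R05'), ('R05', 'J45'), ('R05', 'J20'), ('J20', 'R05')]
```

The resulting pairs train a shallow neural network whose hidden layer becomes a dense code embedding; those embeddings, rather than hand-built features, then feed the hospitalization-risk classifier.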
  17. Int J Med Inform. 2019 Sep 27. 132: 103981. pii: S1386-5056(19)30742-7. [Epub ahead of print]
      OBJECTIVES: To determine the effect of a domain-specific ontology and machine learning-driven user interfaces on the efficiency and quality of documentation of presenting problems (chief complaints) in the emergency department (ED).
    METHODS: As part of a quality improvement project, we simultaneously implemented three interventions: a domain-specific ontology, contextual autocomplete, and top five suggestions. Contextual autocomplete is a user interface that ranks concepts by their predicted probability, which helps nurses enter data about a patient's presenting problems. Nurses were also given a list of top five suggestions to choose from. These presenting problems were represented using a consensus ontology mapped to SNOMED CT. Predicted probabilities were calculated using a previously derived model based on triage vital signs and a brief free-text note. We evaluated the percentage and quality of structured data captured using a mixed methods retrospective before-and-after study design.
    RESULTS: A total of 279,231 consecutive patient encounters were analyzed. Structured data capture improved from 26.2% to 97.2% (p < 0.0001). During the post-implementation period, presenting problems were more complete (3.35 vs. 3.66; p = 0.0004) and higher in overall quality (3.38 vs. 3.72; p = 0.0002), but showed no difference in precision (3.59 vs. 3.74; p = 0.1). Our system reduced the mean number of keystrokes required to document a presenting problem from 11.6 to 0.6 (p < 0.0001), a 95% improvement.
    DISCUSSION: We demonstrated a technique that captures structured data on nearly all patients. We estimate that our system reduces the number of man-hours required annually to type presenting problems at our institution from 92.5 h to 4.8 h.
    CONCLUSION: Implementation of a domain-specific ontology and machine learning-driven user interfaces resulted in improved structured data capture, ontology usage compliance, and data quality.
    Keywords:  Artificial intelligence; Contextual autocomplete; Machine learning; Ontology; User-computer interface
    DOI:  https://doi.org/10.1016/j.ijmedinf.2019.103981
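The abstract's contextual autocomplete ranks ontology concepts by a model-predicted probability as the nurse types. A minimal sketch of that ranking step, assuming the per-concept probabilities have already been produced upstream by the paper's triage-vitals/free-text model (the concept names, probabilities, and prefix-matching rule here are illustrative):

```python
def contextual_autocomplete(prefix, concept_probs, k=5):
    """Return up to k ontology concepts matching the typed prefix,
    ranked by predicted probability (descending).

    concept_probs: dict mapping concept name -> predicted probability
                   for this patient, computed by an upstream model."""
    matches = [(c, p) for c, p in concept_probs.items()
               if c.lower().startswith(prefix.lower())]
    return [c for c, _ in sorted(matches, key=lambda cp: -cp[1])][:k]

# Hypothetical probabilities for one triage encounter
probs = {"chest pain": 0.30, "cough": 0.20, "headache": 0.15, "chills": 0.10}
print(contextual_autocomplete("ch", probs))
```

Surfacing the highest-probability matches first is what drives the keystroke reduction the abstract reports: in the best case a nurse accepts a top suggestion without typing at all.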
  18. Medicine (Baltimore). 2019 Oct;98(42): e17596
      To date, consumer health tools available over the web suffer from serious limitations that lead to low-quality health-related information. While health data in our world are abundant, access to them is limited because of liability and privacy constraints. The objective of the present study was to develop and evaluate an algorithm-based tool that aims at providing the public with reliable, data-driven, and personalized information regarding their symptoms, to help them and their physicians make better-informed decisions based on statistics describing "people like you", who have experienced similar symptoms. We studied anonymized medical records of Maccabi Health Care. The data were analyzed by employing machine learning methodology and Natural Language Processing (NLP) tools. The NLP tools were developed to extract information from unstructured free text written by Maccabi's physicians. Using machine learning and NLP on over 670 million notes of patients' visits with Maccabi physicians accrued since 1993, we developed predictors for medical conditions based on patterns of symptoms and personal characteristics. The algorithm was launched for Maccabi insured members on January 7, 2018 and for members of the Integrity Family Care program in Alabama on May 1, 2018. The app invites the user to describe his or her main symptom or several symptoms, and this prompts a series of questions along the path developed by the algorithm, based on the analysis of 70 million patients' visits to their physicians. Users started dialogues with 225 different types of symptoms, answering on average 22 questions before seeing how people similar to them were diagnosed. Users usually described between 3 and 4 symptoms (mean 3.2) in the health dialogue. In response to the question "conditions verified by your doctor", 82.4% of responders (895/1085) in Maccabi reported that the diagnoses suggested by K's health dialogues were in agreement with their doctor's final diagnosis. In Integrity Health Services, 85.4% of responders (111/130) were in agreement with the physicians' diagnosis. While the program achieves very high approval rates by its users, its primary achievement is the 85% accuracy in identifying the most likely diagnosis, with the gold standard being the final diagnosis made by the personal physician in each individual case. Moreover, the machine learning algorithm continues to update itself with the feedback given by users.
    DOI:  https://doi.org/10.1097/MD.0000000000017596
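The "people like you" idea underlying the dialogue can be sketched as a frequency lookup: among past patients who reported the same symptoms, how were they ultimately diagnosed? The actual system is far more elaborate (NLP over free-text notes, an adaptive question path, personal characteristics); the record structure and matching rule below are purely illustrative assumptions:

```python
from collections import Counter

def people_like_you(records, symptoms):
    """Return the distribution of final diagnoses among past patients
    whose reported symptoms include all of the user's symptoms.

    records: list of dicts with hypothetical keys "symptoms" (list of
             str) and "diagnosis" (str)."""
    similar = [r for r in records if set(symptoms) <= set(r["symptoms"])]
    counts = Counter(r["diagnosis"] for r in similar)
    total = sum(counts.values())
    return {d: c / total for d, c in counts.items()} if total else {}

# Toy illustration with fabricated records
records = [
    {"symptoms": ["cough", "fever"], "diagnosis": "flu"},
    {"symptoms": ["cough", "fever", "fatigue"], "diagnosis": "flu"},
    {"symptoms": ["cough"], "diagnosis": "cold"},
]
print(people_like_you(records, ["cough", "fever"]))
```

Each answered question in the dialogue effectively adds a symptom (or rules one out), narrowing the cohort of similar patients before the diagnosis distribution is shown.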
  19. Int J Med Inform. 2019 Sep 25. pii: S1386-5056(19)30345-4. [Epub ahead of print]132 103976
      INTRODUCTION: There is increasing demand for access to medical information via patients' portals. However, one of the challenges towards widespread utilisation of such services is maintaining the security of those portals. Recent reports show an alarming increase in cyber-attacks using crawlers. These software programs crawl web pages and are capable of executing various commands such as attacking web servers, cracking passwords, harvesting users' personal information, and testing the vulnerability of servers. The aim of this research is to develop a new effective model for detecting malicious crawlers based on their navigational behaviour using machine-learning techniques.
    METHOD: In this research, different methods of crawler detection were investigated. Log files of a sample of compromised websites were analysed and the best features for the detection of crawlers were extracted. Then, after testing and comparing several machine learning algorithms, including Support Vector Machine (SVM), Bayesian Network, and Decision Tree, the best model was developed using the most appropriate features and its accuracy was evaluated.
    RESULTS: Our analysis showed that SVM-based models can yield higher accuracy (f-measure = 0.97) compared with the Bayesian Network (f-measure = 0.88), Decision Tree (f-measure = 0.95), and artificial neural network (ANN) (f-measure = 0.87) for detecting malicious crawlers. However, extracting proper features can increase the performance of the SVM (f-measure = 0.98), the Bayesian Network (f-measure = 0.94), the Decision Tree (f-measure = 0.96), and the ANN (f-measure = 0.92).
    CONCLUSION: Security concerns are among the potential barriers to widespread utilisation of patient portals. Machine learning algorithms can accurately detect malicious crawlers and enhance the security of sensitive patient information. Selecting appropriate features for the development of these algorithms can remarkably increase their accuracy.
    Keywords:  Feature extraction; Malicious crawlers; Security of patient portal; Support vector machines
    DOI:  https://doi.org/10.1016/j.ijmedinf.2019.103976
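The abstract emphasises that feature extraction from web-server logs is what drives classifier performance. A minimal sketch of turning one session's log entries into navigational-behaviour features; the specific features and the request-tuple format here are common examples from the crawler-detection literature, not the feature set the paper derived:

```python
def session_features(requests):
    """Extract per-session navigation features from parsed log entries.

    requests: list of (method, path, status) tuples for one session,
              e.g. ("GET", "/robots.txt", 200). The chosen features
              (HEAD ratio, image ratio, error ratio, robots.txt access)
              are illustrative, not the paper's selected feature set."""
    n = len(requests)
    head = sum(1 for m, _, _ in requests if m == "HEAD")
    img = sum(1 for _, p, _ in requests
              if p.lower().endswith((".png", ".jpg", ".gif", ".ico")))
    errors = sum(1 for _, _, s in requests if s >= 400)
    robots = any(p.endswith("robots.txt") for _, p, _ in requests)
    return {
        "head_ratio": head / n,      # crawlers often probe with HEAD
        "image_ratio": img / n,      # crawlers rarely fetch page assets
        "error_ratio": errors / n,   # vulnerability scans trigger 4xx
        "robots_txt": robots,        # humans almost never request it
    }

reqs = [("GET", "/robots.txt", 200), ("HEAD", "/a.html", 200),
        ("GET", "/img/logo.png", 200), ("GET", "/missing", 404)]
print(session_features(reqs))
```

These feature vectors would then be fed to the SVM (or Bayesian Network, Decision Tree, ANN) classifiers the abstract compares.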