bims-arihec Biomed News
on Artificial intelligence in healthcare
Issue of 2019–12–29
24 papers selected by
Céline Bélanger, Cogniges Inc.



  1. Curr Genet Med Rep. 2019 Dec;7(4): 208-213
       Purpose of Review: We critically evaluate the future potential of machine learning (ML), deep learning (DL), and artificial intelligence (AI) in precision medicine. The goal of this work is to show progress in ML in digital health, to exemplify future needs and trends, and to identify any essential prerequisites of AI and ML for precision health.
    Recent Findings: High-throughput technologies are delivering growing volumes of biomedical data, such as large-scale genome-wide sequencing assays, libraries of medical images, and drug perturbation screens of healthy, developing, and diseased tissue. Multi-omics data in biomedicine are deep and complex, offering an opportunity for data-driven insights and automated disease classification. Learning from these data will broaden our understanding and definition of healthy baselines and disease signatures. State-of-the-art applications of deep neural networks include digital image recognition, single-cell clustering, and virtual drug screens, demonstrating the breadth and power of ML in biomedicine.
    Summary: Significantly, AI and systems biology have embraced big data challenges and may enable novel biotechnology-derived therapies to facilitate the implementation of precision medicine approaches.
    Keywords:  AI; DL; DNN; Deep learning; Digital health; Digital pathology; ML; Machine learning; Multi-omics; Precision medicine; Single-cell transcriptomics; Spatial transcriptomics; Systems biology
    DOI:  https://doi.org/10.1007/s40142-019-00177-4
  2. Spine J. 2019 Dec 23. pii: S1529-9430(19)31144-1. [Epub ahead of print]
       BACKGROUND: Incidental durotomy is a common intraoperative complication during spine surgery with potential implications for postoperative recovery, patient-reported outcomes, length of stay, and costs. To our knowledge, there are no processes available for automated surveillance of incidental durotomy.
    PURPOSE: The purpose of this study was to develop natural language processing (NLP) algorithms for automated detection of incidental durotomies in free-text operative notes of patients undergoing lumbar spine surgery.
    PATIENT SAMPLE: Adult patients 18 years or older undergoing lumbar spine surgery between January 1st, 2000 and June 31st, 2018 at two academic and three community medical centers.
    OUTCOME MEASURES: The primary outcome was defined as intra-operative durotomy recorded in free-text operative notes.
    METHODS: An 80:20 stratified split was undertaken to create training and testing populations. An extreme gradient-boosting NLP algorithm was developed to detect incidental durotomy. Discrimination was assessed via the area under the receiver operating characteristic curve (AUC-ROC), the precision-recall curve, and the Brier score. Performance of this algorithm was compared to Current Procedural Terminology (CPT) and International Classification of Diseases (ICD) codes for durotomy.
    RESULTS: Overall, 1000 patients were included in the study and 93 (9.3%) had a recorded incidental durotomy in the free-text operative report. In the independent testing set (n = 200) not used for model development, the NLP algorithm achieved AUC-ROC of 0.99 for detection of durotomy. In comparison, the CPT/ICD codes had AUC-ROC of 0.64. In the testing set, the NLP algorithm detected 16/18 patients with incidental durotomy (sensitivity 0.89) whereas the CPT and ICD codes detected 5/18 (sensitivity 0.28). At a threshold of 0.05, the NLP algorithm had specificity of 0.99, positive predictive value of 0.89, and negative predictive value of 0.99.
    CONCLUSION: Internal validation of the NLP algorithm developed in this study indicates promising results for future NLP applications in spine surgery. Pending external validation, the NLP algorithm developed in this study may be used by entities including national spine registries or hospital quality and safety departments to automate tracking of incidental durotomies.
    Keywords:  artificial intelligence; diagnosis; dural tear; durotomy; machine learning; natural language processing; prediction; spine
    DOI:  https://doi.org/10.1016/j.spinee.2019.12.006
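    As a rough illustration of the kind of pipeline described in this entry, the sketch below pairs bag-of-words text features with an extreme gradient-boosting classifier and reports AUC-ROC and Brier score on a stratified 20% hold-out. The file name, column names, and hyperparameters are assumptions for illustration, not the authors' implementation.

      # Sketch: TF-IDF features + gradient-boosted classifier for flagging
      # incidental durotomy in operative notes. Column names are assumptions.
      import pandas as pd
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics import brier_score_loss, roc_auc_score
      from sklearn.model_selection import train_test_split
      from xgboost import XGBClassifier

      notes = pd.read_csv("operative_notes.csv")        # columns: text, durotomy (0/1)
      X_train, X_test, y_train, y_test = train_test_split(
          notes["text"], notes["durotomy"], test_size=0.2,
          stratify=notes["durotomy"], random_state=0)   # 80:20 stratified split

      vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
      Xtr = vectorizer.fit_transform(X_train)
      Xte = vectorizer.transform(X_test)

      model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
      model.fit(Xtr, y_train)

      proba = model.predict_proba(Xte)[:, 1]
      print("AUC-ROC:", roc_auc_score(y_test, proba))
      print("Brier score:", brier_score_loss(y_test, proba))
      print("Notes flagged at threshold 0.05:", int((proba >= 0.05).sum()))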
  3. Sensors (Basel). 2019 Dec 22;20(1). pii: E89. [Epub ahead of print]
      Bruxism is a masticatory muscle activity characterized by high prevalence, widespread complications, and serious consequences, but without specific guidelines for its diagnosis and treatment. Although occlusal force-based biofeedback therapy is proven to be safe and effective, with few side effects, in improving bruxism, its mechanism and key technologies remain unclear. The purpose of this study was to develop a real-time, quantitative, intelligent, and precise force-based biofeedback detection device based on artificial intelligence (AI) algorithms for the diagnosis and treatment of bruxism. Stress sensors were integrated and embedded into a resin-based occlusion stabilization splint by using a layering technique (sandwich method). The sensor system mainly consisted of a pressure signal acquisition module, a main control module, and a server terminal. A machine learning algorithm was leveraged for occlusal force data processing and parameter configuration. This study implemented a sensor prototype system from scratch to fully evaluate each component of the intelligent splint. Experimental results showed reasonable parameter metrics for the sensor system and demonstrated the feasibility of the proposed scheme for bruxism treatment. The intelligent occlusion stabilization splint with a stress sensor system is a promising approach to bruxism diagnosis and treatment.
    Keywords:  artificial intelligence; biofeedback treatment; bruxism; data analysis; engineering; machine learning; occlusal splint; stress sensor system
    DOI:  https://doi.org/10.3390/s20010089
  4. NPJ Digit Med. 2019;2: 130
      Data is foundational to high-quality artificial intelligence (AI). Given that a substantial amount of clinically relevant information is embedded in unstructured data, natural language processing (NLP) plays an essential role in extracting valuable information that can benefit decision making, administrative reporting, and research. Here, we share several desiderata pertaining to the development and usage of NLP systems, derived from two decades of experience implementing clinical NLP at the Mayo Clinic, to inform the healthcare AI community. Using a framework we developed as an example implementation, the desiderata emphasize the importance of a user-friendly platform, efficient collection of domain expert inputs, seamless integration with clinical data, and a highly scalable computing infrastructure.
    Keywords:  Health care; Medical research
    DOI:  https://doi.org/10.1038/s41746-019-0208-8
  5. BMC Med Inform Decis Mak. 2019 Dec 21. 19(1): 281
       BACKGROUND: Supervised machine learning algorithms have been a dominant method in the data mining field. Disease prediction using health data has recently emerged as a potential application area for these methods. This study aims to identify the key trends among different types of supervised machine learning algorithms, and their performance and usage, for disease risk prediction.
    METHODS: In this study, extensive research efforts were made to identify studies that applied more than one supervised machine learning algorithm to the prediction of a single disease. Two databases (i.e., Scopus and PubMed) were searched using different combinations of search terms. In total, 48 articles were selected for the comparison of supervised machine learning algorithms for disease prediction.
    RESULTS: We found that the Support Vector Machine (SVM) algorithm was applied most frequently (in 29 studies), followed by the Naïve Bayes algorithm (in 23 studies). However, the Random Forest (RF) algorithm showed comparatively superior accuracy: of the 17 studies in which it was applied, RF achieved the highest accuracy in 9 (53%). It was followed by SVM, which performed best in 41% of the studies in which it was considered.
    CONCLUSION: This study provides a wide overview of the relative performance of different variants of supervised machine learning algorithms for disease prediction. This information on relative performance can be used to aid researchers in selecting an appropriate supervised machine learning algorithm for their studies.
    Keywords:  Disease prediction; Machine learning; Medical data; Supervised machine learning algorithm
    DOI:  https://doi.org/10.1186/s12911-019-1004-8
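    For context on the comparison above, a minimal sketch of benchmarking the three most frequently cited algorithm families (SVM, Naïve Bayes, Random Forest) on one tabular prediction task with cross-validated AUC; scikit-learn's bundled breast-cancer dataset stands in for real clinical data.

      # Sketch: cross-validated comparison of SVM, Naive Bayes and Random Forest
      # on a single disease-prediction task (demo dataset stands in for clinical data).
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)

      models = {
          "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
          "Naive Bayes": GaussianNB(),
          "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
      }

      for name, model in models.items():
          auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
          print(f"{name}: mean AUC = {auc.mean():.3f} (+/- {auc.std():.3f})")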
  6. Hum Vaccin Immunother. 2019 Dec 23. 1-8
      Subjects receiving the same vaccine often show different levels of immune response, and some may even present adverse side effects to the vaccine. Systems vaccinology can combine omics data and machine learning techniques to obtain highly predictive signatures of vaccine immunogenicity and reactogenicity. Currently, several machine learning methods are already available to researchers with no background in bioinformatics. Here we describe the four main steps to discover markers of vaccine immunogenicity and reactogenicity: (1) preparing the data; (2) selecting the vaccinees and relevant genes; (3) choosing the algorithm; and (4) blind testing the model. With the increasing number of systems vaccinology datasets being generated, we expect that the accuracy and robustness of signatures of vaccine reactogenicity and immunogenicity will significantly improve.
    Keywords:  Systems vaccinology; artificial intelligence; machine learning; vaccine immunogenicity; vaccine reactogenicity
    DOI:  https://doi.org/10.1080/21645515.2019.1697110
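    A minimal sketch of the four steps listed above, assuming a gene-expression matrix with one row per vaccinee and a binary responder label; the toy data, feature-selection size, and choice of random forest are illustrative assumptions.

      # Sketch of the four-step workflow: (1) prepare data, (2) select relevant
      # genes, (3) choose an algorithm, (4) blind-test on held-out vaccinees.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 5000))          # 120 vaccinees x 5000 genes (toy data)
      y = rng.integers(0, 2, size=120)          # high vs. low responder label

      # (4) hold out a blind test set before any modelling decisions are made
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, stratify=y, random_state=0)

      pipeline = Pipeline([
          ("scale", StandardScaler()),                      # (1) prepare the data
          ("select", SelectKBest(f_classif, k=50)),         # (2) select relevant genes
          ("model", RandomForestClassifier(n_estimators=300, random_state=0)),  # (3)
      ])
      pipeline.fit(X_train, y_train)
      print("Blind-test AUC:",
            roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1]))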
  7. Rheumatol Adv Pract. 2019;3(2): rkz047
       Objective: The purpose of this research was to develop a deep-learning model to assess radiographic finger joint destruction in RA.
    Methods: The model comprises two steps: a joint-detection step and a joint-evaluation step. Among 216 radiographs of 108 patients with RA, 186 radiographs were assigned to the training/validation dataset and 30 to the test dataset. In the training/validation dataset, images of PIP joints, the IP joint of the thumb, or MCP joints were manually clipped and scored for joint space narrowing (JSN) and bone erosion by clinicians, and then these images were augmented. As a result, 11,160 images were used to train and validate a deep convolutional neural network for joint evaluation. Three thousand seven hundred and twenty selected images were used to train machine learning for joint detection. These steps were combined as the assessment model for radiographic finger joint destruction. Performance of the model was examined using the test dataset, which was not included in the training/validation process, by comparing the scores assigned by the model and clinicians.
    Results: The model detected PIP joints, the IP joint of the thumb and MCP joints with a sensitivity of 95.3% and assigned scores for JSN and erosion. Accuracy (percentage of exact agreement) reached 49.3-65.4% for JSN and 70.6-74.1% for erosion. The correlation coefficient between scores by the model and clinicians per image was 0.72-0.88 for JSN and 0.54-0.75 for erosion.
    Conclusion: Image processing with the trained convolutional neural network model is promising to assess radiographs in RA.
    Keywords:  artificial intelligence; joint destruction; rheumatoid arthritis
    DOI:  https://doi.org/10.1093/rap/rkz047
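    One concrete piece of the pipeline above is the augmentation of the clipped joint images before training the joint-evaluation network. A possible Keras sketch, assuming the crops are stored in one folder per score class; the directory layout and augmentation parameters are assumptions, not the authors' settings.

      # Sketch: augmenting clipped finger-joint images before CNN training.
      # Directory layout and augmentation parameters are illustrative only.
      import tensorflow as tf

      datagen = tf.keras.preprocessing.image.ImageDataGenerator(
          rescale=1.0 / 255,
          rotation_range=15,          # small rotations of the clipped joint
          width_shift_range=0.1,
          height_shift_range=0.1,
          zoom_range=0.1,
          horizontal_flip=True,
          validation_split=0.2,
      )

      train_flow = datagen.flow_from_directory(
          "joints/",                  # one sub-folder per JSN / erosion score
          target_size=(128, 128),
          color_mode="grayscale",
          class_mode="categorical",
          subset="training",
      )
      val_flow = datagen.flow_from_directory(
          "joints/", target_size=(128, 128), color_mode="grayscale",
          class_mode="categorical", subset="validation",
      )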
  8. J Ultrasound Med. 2019 Dec 24.
       OBJECTIVES: Little is known about optimal deep learning (DL) approaches for point-of-care ultrasound (POCUS) applications. We compared 6 popular DL architectures for POCUS cardiac image classification to determine whether an optimal DL architecture exists for future DL algorithm development in POCUS.
    METHODS: We trained 6 convolutional neural networks (CNNs) with a range of complexities and ages (AlexNet, VGG-16, VGG-19, ResNet50, DenseNet201, and Inception-v4). Each CNN was trained by using images of 5 typical POCUS cardiac views. Images were extracted from 225 publicly available deidentified POCUS cardiac videos. A total of 750,018 individual images were extracted, with 90% used for model training and 10% for cross-validation. The training time and accuracy achieved were tracked. A real-world test of the algorithms was performed on a set of 125 completely new cardiac images. Descriptive statistics, Pearson R values, and κ values were calculated for each CNN.
    RESULTS: Accuracy for the 6 CNNs ranged from 85.6% to 96% correct. VGG-16, one of the oldest and simplest CNNs, performed best at 96% correct with 232 minutes to train (R = 0.97; κ = 0.95; P < .00001). The worst-performing CNN was the newer DenseNet201, with 85.6% accuracy and 429 minutes to train (R = 0.92; κ = 0.82; P < .00001).
    CONCLUSIONS: Six common image classification DL algorithms showed considerable variability in their accuracy and training time when trained and tested on identical data, suggesting that not all will perform optimally for POCUS DL applications. Contrary to well-established accuracies for CNNs, more modern and deeper algorithms yielded poorer results.
    Keywords:  artificial intelligence; deep learning; echo; emergency medicine; emergency ultrasound; point-of-care ultrasound
    DOI:  https://doi.org/10.1002/jum.15206
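    A minimal sketch of the head-to-head evaluation reported above: scoring each trained view classifier on the same held-out images with accuracy, Cohen's kappa, and Pearson R. The label arrays below are simulated stand-ins for real model outputs.

      # Sketch: comparing view classifiers on an identical held-out image set
      # using accuracy, Cohen's kappa and Pearson R, as reported above.
      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      def evaluate(name, y_true, y_pred):
          """y_true / y_pred: integer-coded cardiac view labels for the test images."""
          acc = accuracy_score(y_true, y_pred)
          kappa = cohen_kappa_score(y_true, y_pred)
          r, p = pearsonr(y_true, y_pred)
          print(f"{name}: accuracy={acc:.3f}, kappa={kappa:.3f}, R={r:.2f} (p={p:.2g})")

      # Toy example with 125 test images and 5 view classes
      rng = np.random.default_rng(0)
      y_true = rng.integers(0, 5, size=125)
      y_vgg16 = np.where(rng.random(125) < 0.96, y_true, rng.integers(0, 5, size=125))
      y_densenet = np.where(rng.random(125) < 0.86, y_true, rng.integers(0, 5, size=125))

      evaluate("VGG-16", y_true, y_vgg16)
      evaluate("DenseNet201", y_true, y_densenet)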
  9. Cancers (Basel). 2019 Dec 22;12(1). pii: E50. [Epub ahead of print]
      (1) Background: Recently, it has been shown that the extent of resection (EOR) and the molecular classification of low-grade gliomas (LGGs) carry prognostic significance. However, a prognostic stratification of patients that assigns a specific weight to each of the parameters able to predict prognosis is still missing. Here, we adopt classic statistics and an artificial intelligence algorithm to define a multiparametric prognostic stratification of grade II glioma patients. (2) Methods: 241 adults who underwent surgery for a supratentorial LGG were included. Clinical, neuroradiological, surgical, histopathological, and molecular data were assessed for their ability to predict overall survival (OS), progression-free survival (PFS), and malignant progression-free survival (MPFS). Finally, a decision-tree algorithm was employed to stratify patients. (3) Results: Classic statistics confirmed EOR, pre-operative and post-operative tumor volumes, Ki67, and the molecular classification as independent predictors of OS, PFS, and MPFS. The decision-tree approach provided an algorithm capable of identifying prognostic factors and defining both the cut-off levels and the hierarchy to be used in order to delineate specific prognostic classes with high positive predictive value. Key results were the superior role of EOR over that of molecular class, the importance of second surgery, and the role of different prognostic factors within the three molecular classes. (4) Conclusions: This study proposes a stratification of LGG patients based on different combinations of clinical, molecular, and imaging data, adopting a supervised non-parametric learning method. If validated in independent case studies, the clinical utility of this innovative stratification approach could be confirmed.
    Keywords:  MRI data; artificial intelligence; decision trees; extent of resection; grade II glioma; molecular classification; prognosis
    DOI:  https://doi.org/10.3390/cancers12010050
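    A minimal sketch of the decision-tree stratification step, assuming a cohort table containing the predictors named above (EOR, tumor volumes, Ki67, molecular class) and a binary outcome; the file, column names, and tree depth are illustrative assumptions.

      # Sketch: decision-tree stratification of LGG patients into prognostic
      # classes with explicit cut-offs. Feature names and data are illustrative.
      import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier, export_text

      df = pd.read_csv("lgg_cohort.csv")           # hypothetical cohort table
      features = ["EOR", "preop_volume", "postop_volume", "Ki67", "molecular_class"]
      X = pd.get_dummies(df[features], columns=["molecular_class"])
      y = df["progression_within_5y"]              # hypothetical binary outcome

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, stratify=y, random_state=0)

      tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
      tree.fit(X_train, y_train)

      # The printed rules expose both the hierarchy of predictors and the
      # cut-off values that define each prognostic class.
      print(export_text(tree, feature_names=list(X.columns)))
      print("Held-out accuracy:", tree.score(X_test, y_test))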
  10. J Laryngol Otol. 2019 Dec 23. 1-4
       OBJECTIVE: Deep learning using convolutional neural networks represents a form of artificial intelligence where computers recognise patterns and make predictions based upon provided datasets. This study aimed to determine if a convolutional neural network could be trained to differentiate the location of the anterior ethmoidal artery as either adhered to the skull base or within a bone 'mesentery' on sinus computed tomography scans.
    METHODS: Coronal sinus computed tomography scans were reviewed by two otolaryngology residents for anterior ethmoidal artery location and used as data for the Google Inception-V3 convolutional neural network base. The classification layer of Inception-V3 was retrained in Python using a transfer learning method to interpret the computed tomography images.
    RESULTS: A total of 675 images from 388 patients were used to train the convolutional neural network. A further 197 unique images were used to test the algorithm; this yielded a total accuracy of 82.7 per cent (95 per cent confidence interval = 77.7-87.8), kappa statistic of 0.62 and area under the curve of 0.86.
    CONCLUSION: Convolutional neural networks demonstrate promise in identifying clinically important structures in functional endoscopic sinus surgery, such as anterior ethmoidal artery location on pre-operative sinus computed tomography.
    Keywords:  Anterior Ethmoidal Artery; Artificial Intelligence; Complication; Deep Learning; Endoscopic Sinus Surgery; Injuries; Skull Base
    DOI:  https://doi.org/10.1017/S0022215119002536
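    A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained Inception-V3 base is frozen and only a new classification layer is trained on the CT images. Directory layout, image size, and hyperparameters are assumptions.

      # Sketch: retraining the classification layer of Inception-V3 on coronal
      # sinus CT images (artery adhered vs. mesentery). Parameters are illustrative.
      import tensorflow as tf

      base = tf.keras.applications.InceptionV3(
          weights="imagenet", include_top=False, input_shape=(299, 299, 3))
      base.trainable = False                      # keep pretrained features frozen

      model = tf.keras.Sequential([
          tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1, input_shape=(299, 299, 3)),
          base,
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(1, activation="sigmoid"),   # new classification layer
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

      train_ds = tf.keras.utils.image_dataset_from_directory(
          "ct_images/train", label_mode="binary", image_size=(299, 299), batch_size=16)
      val_ds = tf.keras.utils.image_dataset_from_directory(
          "ct_images/val", label_mode="binary", image_size=(299, 299), batch_size=16)

      model.fit(train_ds, validation_data=val_ds, epochs=10)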
  11. PLoS One. 2019;14(12): e0226765
      Among women, breast cancer is a leading cause of death. Breast cancer risk predictions can inform screening and preventative actions. Previous works found that adding inputs to the widely-used Gail model improved its ability to predict breast cancer risk. However, these models used simple statistical architectures and the additional inputs were derived from costly and/or invasive procedures. By contrast, we developed machine learning models that used highly accessible personal health data to predict five-year breast cancer risk. We created machine learning models using only the Gail model inputs and models using both Gail model inputs and additional personal health data relevant to breast cancer risk. For both sets of inputs, six machine learning models were trained and evaluated on the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial data set. The area under the receiver operating characteristic curve metric quantified each model's performance. Since this data set has a small percentage of positive breast cancer cases, we also reported sensitivity, specificity, and precision. We used DeLong tests (p < 0.05) to compare the testing data set performance of each machine learning model to that of the Breast Cancer Risk Prediction Tool (BCRAT), an implementation of the Gail model. None of the machine learning models with only BCRAT inputs were significantly stronger than the BCRAT. However, the logistic regression, linear discriminant analysis, and neural network models with the broader set of inputs were all significantly stronger than the BCRAT. These results suggest that relative to the BCRAT, additional easy-to-obtain personal health inputs can improve five-year breast cancer risk prediction. Our models could be used as non-invasive and cost-effective risk stratification tools to increase early breast cancer detection and prevention, motivating both immediate actions like screening and long-term preventative measures such as hormone replacement therapy and chemoprevention.
    DOI:  https://doi.org/10.1371/journal.pone.0226765
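    The comparison above hinges on testing whether two models' AUCs differ on the same test set (the DeLong test). scikit-learn does not ship a DeLong implementation, so the sketch below uses a paired bootstrap over the test set as a simple stand-in; variable names are assumptions.

      # Sketch: paired bootstrap comparison of two models' test-set AUCs, used
      # here as a stand-in for the DeLong test (variable names are assumptions).
      import numpy as np
      from sklearn.metrics import roc_auc_score

      def paired_bootstrap_auc_diff(y_true, proba_a, proba_b, n_boot=2000, seed=0):
          """Return the observed AUC difference and a bootstrap 95% CI."""
          rng = np.random.default_rng(seed)
          y_true = np.asarray(y_true)
          proba_a = np.asarray(proba_a)
          proba_b = np.asarray(proba_b)
          diffs = []
          n = len(y_true)
          for _ in range(n_boot):
              idx = rng.integers(0, n, size=n)
              if len(np.unique(y_true[idx])) < 2:   # skip resamples with one class only
                  continue
              diffs.append(roc_auc_score(y_true[idx], proba_a[idx]) -
                           roc_auc_score(y_true[idx], proba_b[idx]))
          observed = roc_auc_score(y_true, proba_a) - roc_auc_score(y_true, proba_b)
          lo, hi = np.percentile(diffs, [2.5, 97.5])
          return observed, (lo, hi)

      # Example call: diff, ci = paired_bootstrap_auc_diff(y_test, proba_model, proba_bcrat)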
  12. Alzheimers Dement (N Y). 2019;5: 918-925
       Introduction: The study objective was to build a machine learning model to predict incident mild cognitive impairment, Alzheimer's Disease, and related dementias from structured data using administrative and electronic health record sources.
    Methods: A cohort of patients (n = 121,907) and controls (n = 5,307,045) was created for modeling using data within 2 years of the patients' incident diagnosis date. Additional cohorts 3-8 years removed from the index date were used for prediction. Training cohorts were matched on age, gender, index year, and utilization, and fit with a gradient boosting machine, LightGBM.
    Results: The incident 2-year model, evaluated on a held-out test set, had a sensitivity of 47% and an area under the curve (AUC) of 87%. In the 3-year model, sensitivity fell to 24% (AUC 71%), and dropped to 15% (AUC 72%) in the year-8 model.
    Discussion: The ability of the model to discriminate incident cases of dementia implies that it can be a worthwhile tool to screen patients for trial recruitment and patient management.
    Keywords:  Alzheimer's disease; Gradient boosting machine; Machine learning; Onset of dementia; Prediction
    DOI:  https://doi.org/10.1016/j.trci.2019.10.006
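    A minimal sketch of fitting a LightGBM classifier to a structured feature matrix and reporting held-out AUC and sensitivity, in the spirit of the model above; the data loader, features, and threshold are assumptions.

      # Sketch: LightGBM on structured claims/EHR features with held-out AUC and
      # sensitivity. Feature construction and threshold choice are assumptions.
      import lightgbm as lgb
      from sklearn.metrics import recall_score, roc_auc_score
      from sklearn.model_selection import train_test_split

      # X: matrix of age, sex, utilization and diagnosis-history features
      # y: 1 if the patient received an incident dementia diagnosis, else 0
      X, y = load_cohort()                         # hypothetical loader

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.2, stratify=y, random_state=0)

      model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05,
                                 class_weight="balanced")
      model.fit(X_train, y_train)

      proba = model.predict_proba(X_test)[:, 1]
      pred = (proba >= 0.5).astype(int)
      print("AUC:", roc_auc_score(y_test, proba))
      print("Sensitivity:", recall_score(y_test, pred))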
  13. Gastrointest Endosc. 2019 Dec 21. pii: S0016-5107(19)32560-X. [Epub ahead of print]
      
    DOI:  https://doi.org/10.1016/j.gie.2019.12.018
  14. Circ J. 2019 Dec 21.
      Network medicine can advance current medical practice, having arisen in response to the limitations of a reductionist approach that treats cardiovascular (CV) diseases as the direct consequence of a single defect. This molecular-bioinformatic approach integrates heterogeneous "omics" data and artificial intelligence to identify a chain of perturbations involving key components of multiple molecular networks that are closely related in the human interactome. The clinical view of the network-based approach is strongly supported by the general law of molecular interconnection governing all complex biological systems. Recent advances in bioinformatics have culminated in numerous quantitative platforms able to identify CV disease modules underlying perturbations of the interactome. This might provide novel insights into CV disease mechanisms as well as putative biomarkers and drug targets. We describe the network-based principles and discuss their application to classifying and treating common CV diseases. We compare the strengths and weaknesses of molecular networks with those of the classical reductionist approach, and remark on the necessity of a two-way approach connecting network medicine with large clinical trials to concretely translate novel insights from bench to bedside.
    Keywords:  Artificial intelligence; Cardiovascular diseases; Network medicine; Personalized therapy; Precision medicine
    DOI:  https://doi.org/10.1253/circj.CJ-19-0879
  15. J Dermatolog Treat. 2019 Dec 23. 1-27
      Background: Automatic skin lesion image identification is of utmost importance for developing a fully automated computer-aided skin analysis system. This will help medical practitioners provide treatment for skin lesion diseases more efficiently and effectively.
    Material and Method: In this paper, two image processing techniques for accurate detection of skin lesions are proposed. In the first technique, edge detection is optimized using a branch of artificial intelligence known as nature-inspired algorithms; Ant Colony Optimization (ACO) is used to increase the effectiveness of edge detection in skin lesions. The second technique deals with a color space-based split-and-merge process in combination with global thresholding segmentation and edge smoothing operations.
    Result: The performance of both techniques was measured by the entropy performance evaluation parameter. The results show remarkable improvement in the output images obtained by the Canny edge detection technique optimized by ACO in comparison with the ACO-Sobel, ACO-Prewitt, and edge smoothing-color space techniques.
    Conclusion: The ACO-Canny edge detection technique shows far better efficiency for skin lesion detection compared to the ACO-Sobel, ACO-Prewitt, and edge smoothing-color space techniques.
    Keywords:  Ant Colony Optimization; Artificial Intelligence; Canny; Color space; Edge Detection; Edge Smoothing; Prewitt; Segmentation; Skin lesions; Sobel; Threshold
    DOI:  https://doi.org/10.1080/09546634.2019.1708239
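    A minimal sketch of the evaluation idea above: Canny edge detection on a lesion image scored by the Shannon entropy of the edge map. The ACO tuning of the Canny thresholds is beyond this sketch, so fixed thresholds are used as a stand-in.

      # Sketch: Canny edge detection on a skin-lesion image, scored by Shannon
      # entropy of the edge map. ACO-tuned thresholds are replaced by fixed ones.
      import cv2
      import numpy as np

      def shannon_entropy(img):
          """Entropy of the grey-level histogram, in bits."""
          hist, _ = np.histogram(img, bins=256, range=(0, 256))
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      image = cv2.imread("lesion.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file
      blurred = cv2.GaussianBlur(image, (5, 5), 0)
      edges = cv2.Canny(blurred, 50, 150)          # fixed thresholds stand in for ACO

      print("Entropy of input image:", shannon_entropy(image))
      print("Entropy of edge map:", shannon_entropy(edges))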
  16. J Med Syst. 2019 Dec 23. 44(2): 40
      The Industrial Revolution brought new economics and new epidemic patterns to people, which formed healthcare 1.0, focused on public health solutions. The emergence of mass-production concepts and technology brought healthcare to 2.0: bigger hospitals and better medical education were established, and doctors were trained in specialties for better treatment quality. The shrinking size of computers then allowed the fast development of computer-based devices and information technology, leading healthcare to 3.0. The initiation of smart medicine nowadays announces the arrival of healthcare 4.0, with a new brain and new hands. It is an era of major revision of previous technologies, one of which is artificial intelligence, which will lead humans to a new world that places greater emphasis on advanced and continuous learning.
    DOI:  https://doi.org/10.1007/s10916-019-1513-0
  17. J Orthop Res. 2019 Dec 28.
      The diagnostic utility of radiographic signs of complete discoid lateral meniscus remains controversial. This study aimed to investigate their diagnostic accuracy and to determine which sign most reliably detects the presence of a complete discoid lateral meniscus in children. A total of 141 knees (ages 7-16) with complete discoid lateral meniscus and 141 age- and sex-matched knees with normal menisci were included. The following radiographic signs were evaluated: lateral joint space, fibular head height, lateral tibial spine height, lateral tibial plateau obliquity, lateral femoral condyle squaring, lateral tibial plateau cupping, lateral femoral condyle notching, and prominence ratio of the femoral condyle. Prediction models were constructed using logistic regression, decision tree, and random-forest analyses. Receiver operating characteristic curves and the area under the curve (AUC) were estimated to compare the diagnostic accuracy of the radiographic signs and model fit. The random-forest model yielded the best diagnostic accuracy (AUC: 0.909), with 86.5% sensitivity and 82.2% specificity. Lateral joint space height, fibular head height, and prominence ratio showed significantly larger AUCs than lateral tibial spine height and lateral tibial plateau obliquity (p<0.05 for all). When using the random-forest model, the cut-off values for diagnosing discoid meniscus were <12.55 mm for fibular head height, <0.804 for prominence ratio, and >6.6 mm for lateral joint space height. Based on the results of this study, lateral joint space height, fibular head height, and prominence ratio could easily be used in clinical practice as supplementary tools for diagnosing complete discoid lateral meniscus in children.
    Keywords:  children; complete discoid lateral meniscus; diagnosis; machine learning; radiograph
    DOI:  https://doi.org/10.1002/jor.24578
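    A minimal sketch of a random-forest model over the radiographic signs listed above, including how a diagnostic cut-off for a single sign can be read from its ROC curve; the file and column names are assumptions based on the abstract.

      # Sketch: random forest over radiographic signs plus a Youden-index cut-off
      # for a single sign. Column names follow the abstract; data are assumed.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score, roc_curve
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("knees.csv")                        # hypothetical table
      signs = ["lateral_joint_space", "fibular_head_height", "tibial_spine_height",
               "plateau_obliquity", "condyle_squaring", "plateau_cupping",
               "condyle_notching", "prominence_ratio"]
      X, y = df[signs], df["discoid_meniscus"]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                stratify=y, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      proba = rf.predict_proba(X_te)[:, 1]
      print("AUC:", roc_auc_score(y_te, proba))
      print(pd.Series(rf.feature_importances_, index=signs).sort_values(ascending=False))

      # Cut-off for one sign via the Youden index on its own ROC curve
      # (the sign is negated because lower fibular head height suggests discoid)
      fpr, tpr, thr = roc_curve(y_te, -X_te["fibular_head_height"])
      print("Fibular head height cut-off:", -thr[(tpr - fpr).argmax()])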
  18. Radiother Oncol. 2019 Dec 20;145: 1-6. pii: S0167-8140(19)33491-7. [Epub ahead of print]
       AIM: The segmentation of organs from a CT scan is a time-consuming task, which is one hindrance for adaptive radiation therapy. Through deep learning, it is possible to automatically delineate organs. Metrics like dice score do not necessarily represent the impact for clinical practice. Therefore, a clinical evaluation of the deep neural network is needed to verify the segmentation quality.
    METHODS: In this work, a novel deep neural network is trained on 300 CT and 300 artificially generated pseudo CBCTs to segment bladder, prostate, rectum and seminal vesicles from CT and cone beam CT scans. The model is evaluated on 45 CBCT and 5 CT scans through a clinical review performed by three different clinics located in Europe, North America and Australia.
    RESULTS: The deep learning model is scored either equally good (prostate and seminal vesicles) or better (bladder and rectum) than the structures from routine clinical practice. No or minor corrections are required for 97.5% of the segmentations of the bladder, 91.5% of the prostate, 94% of the rectum and seminal vesicles. Overall, for 82.5% of the patients none of the organs need major corrections or a redraw.
    CONCLUSION: This study shows that modern deep neural networks are capable of producing clinically applicable organ segmentations for the male pelvis. The model is able to produce acceptable structures as frequently as current clinical routine. Therefore, deep neural networks can simplify the clinical workflow by offering initial segmentations. The study further shows that, to retain clinicians' personal preferences, review and correction are necessary for structures created by other clinicians as well as for those created by deep neural networks.
    Keywords:  Artificial intelligence; Deep learning; Male pelvis; Radiotherapy; Segmentation
    DOI:  https://doi.org/10.1016/j.radonc.2019.11.021
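    Since the abstract notes that metrics like the Dice score do not necessarily reflect clinical impact, the sketch below shows what that baseline metric computes for a predicted versus reference organ mask (binary arrays assumed).

      # Sketch: Dice similarity coefficient between a predicted organ mask and a
      # reference contour, the geometric metric the clinical review goes beyond.
      import numpy as np

      def dice_score(pred, ref, eps=1e-7):
          """pred, ref: boolean/0-1 arrays of identical shape (e.g. 3D CT masks)."""
          pred = np.asarray(pred, dtype=bool)
          ref = np.asarray(ref, dtype=bool)
          intersection = np.logical_and(pred, ref).sum()
          return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

      # Toy example: two overlapping "bladder" masks on a small grid
      a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
      b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
      print("Dice:", round(dice_score(a, b), 3))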
  19. Injury. 2019 Dec 09. pii: S0020-1383(19)30778-8. [Epub ahead of print]
     INTRODUCTION: Bladder rupture following blunt pelvic trauma is rare, though it can have significant sequelae. We sought to determine whether machine learning could help predict the presence of bladder injury using factors available at the time of presentation of blunt pelvic trauma.
    MATERIALS AND METHODS: Adult patients at a Level I trauma center with blunt trauma pelvic fractures from January 1, 2005 to December 31, 2017 were identified. Patients with admission urinalysis data, fracture ICD-9 codes, and mechanism of injury available in the trauma registry were included. Patients with bladder rupture and pelvic fracture were compared to those with pelvic fracture alone. Classification was performed using the MATLAB Classification Learner tool. Classification performance was tested with machine learning algorithms from the Decision Tree, Logistic Regression, Naïve Bayes, Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Ensemble classifier families.
    RESULTS: Of the 3063 eligible pelvic fracture patients identified, 208 (6.8%) had concomitant bladder ruptures. Twenty machine learning algorithms were then tested based on pelvic fracture ICD-9 code, admission urinalysis, and mechanism of injury. The best classification results were obtained using the Gaussian Naïve Bayes and Kernel Naïve Bayes classifiers, both with accuracy of 97.8%, specificity of 99%, sensitivity of 83%, and area under the curve (AUC) of the ROC curve of 0.99.
    CONCLUSION: Machine learning algorithms can be used to help predict with a high level of accuracy the presence of bladder rupture with blunt pelvic trauma using readily available information at the time of presentation. This has the potential to improve selection of patients for additional imaging, while also more appropriately allocating hospital resources and reducing patient risks.
    Keywords:  Bladder rupture; Machine learning; Pelvic fracture; Pelvic trauma; Urotrauma
    DOI:  https://doi.org/10.1016/j.injury.2019.12.009
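    The study above used MATLAB's Classification Learner; a rough Python analogue of its best-performing model, a Gaussian Naïve Bayes classifier reported with sensitivity, specificity, and AUC, is sketched below. The file and feature names are assumptions.

      # Sketch: Gaussian Naive Bayes on admission features, reported with
      # sensitivity, specificity and AUC. A Python analogue of the MATLAB workflow.
      import pandas as pd
      from sklearn.metrics import confusion_matrix, roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.naive_bayes import GaussianNB

      df = pd.read_csv("pelvic_fractures.csv")     # hypothetical registry extract
      features = ["gross_hematuria", "urine_rbc", "fracture_pattern_code",
                  "mechanism_code"]
      X, y = pd.get_dummies(df[features]), df["bladder_rupture"]

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                stratify=y, random_state=0)
      nb = GaussianNB().fit(X_tr, y_tr)

      proba = nb.predict_proba(X_te)[:, 1]
      tn, fp, fn, tp = confusion_matrix(y_te, nb.predict(X_te)).ravel()
      print("Sensitivity:", tp / (tp + fn))
      print("Specificity:", tn / (tn + fp))
      print("AUC:", roc_auc_score(y_te, proba))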
  20. J Invest Dermatol. 2020 Jan;140(1): 18-20. pii: S0022-202X(19)32556-4. [Epub ahead of print]
      Identification of quantitative molecular biomarkers to distinguish melanoma from nevi is highly desirable. Expression levels of microRNAs (miRNAs) are promising candidates, but consensus across studies is lacking. Torres et al. (2020) utilized a machine learning pipeline to identify miRNA ratios as strong biomarkers. The results indicate that machine learning, although powerful, requires human input to identify high-quality biomarker signatures.
    DOI:  https://doi.org/10.1016/j.jid.2019.07.688
  21. Health Psychol Res. 2019 Dec 24. 7(2): 8559
      Humans and technology seem to co-evolve in a process of reciprocal conditioning: on the one hand, humans modify (and evolve) technology according to their needs; on the other, technology transforms humans and the way they live. Psychology, as a discipline belonging to the human sciences, is therefore called to take an interest in this relationship and to understand its complexity. A fundamental role in this sense is played by "cyberpsychology", which investigates the psychological phenomena associated with technology and aims to analyse the processes of change triggered by the interaction between humans and the new media. From a psychological point of view, it is important on the one hand to understand how people change in contact with new technologies and what problems they encounter, and on the other hand to understand how new technologies, given their transformative potential, can find a place within therapeutic practice. In this regard, some of the technologies used in the clinical field are analysed, including virtual reality, biosensors, artificial intelligence, and affective computing, with the aim of understanding to what extent, and how, technological progress and the emergence of new technologies can contribute and generate value within the psychological panorama. Following the PRISMA statement, a bibliographic search was carried out in the Medline and PsycINFO databases. Works were selected according to the precision and sensitivity with which they address technological applications in the field of health psychology, and from this selection emerges the new theme of "CYBER HEALTH PSYCHOLOGY". The results of the search suggest that the integrated use of psychological techniques and new technologies is extremely productive in terms of potential improvement of health, and therefore of "health empowerment". In this vision, new technologies are not intended to replace traditional procedures but to integrate them, making available features and potential that humans do not have in nature. Given the great potential of the instruments analysed, which continue to evolve and be refined, it is advisable to know them, validate their effectiveness, and adapt our operational models to these new realities.
    Keywords:  Artificial intelligence; Avatar therapy; Biofeedback in psychology; Cyberpsychology; Virtual reality
    DOI:  https://doi.org/10.4081/hpr.2019.8559
  22. J Psychiatr Res. 2019 Dec 06;121: 189-196. pii: S0022-3956(19)30887-8. [Epub ahead of print]
      A growing literature is utilizing machine learning methods to develop psychopathology risk algorithms that can be used to inform preventive intervention. However, efforts to develop algorithms for internalizing disorder onset have been limited. The goal of this study was to utilize prospective survey data and ensemble machine learning to develop algorithms predicting adult onset internalizing disorders. The data were from Waves 1-2 of the National Epidemiological Survey on Alcohol and Related Conditions (n = 34,653). Outcomes were incident occurrence of DSM-IV generalized anxiety, panic, social phobia, depression, and mania between Waves 1-2. In total, 213 risk factors (features) were operationalized based on their presence/occurrence at the time of or before Wave 1. For each of the five internalizing disorder outcomes, super learning was used to generate a composite algorithm from several linear and non-linear classifiers (e.g., random forests, k-nearest neighbors). AUCs achieved by the cross-validated super learner ensembles were in the range of 0.76 (depression) to 0.83 (mania), and were higher than AUCs achieved by the individual algorithms. Individuals in the top 10% of super learner predicted risk accounted for 37.97% (depression) to 53.39% (social anxiety) of all incident cases. Thus, the algorithms achieved acceptable-to-excellent prediction accuracy with a high concentration of incident cases observed among individuals predicted to be highest risk. In parallel with the development of effective preventive interventions, further validation, expansion, and dissemination of algorithms predicting internalizing disorder onset/trajectory could be of great value.
    Keywords:  Algorithm; Anxiety; Incidence; Machine learning; Mood; Risk score
    DOI:  https://doi.org/10.1016/j.jpsychires.2019.12.006
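    A minimal sketch of a super-learner-style ensemble: several linear and non-linear base learners are stacked with cross-validation, and the share of incident cases captured by the top decile of predicted risk is checked. scikit-learn's StackingClassifier stands in for the authors' super learner; the data loader is a hypothetical placeholder.

      # Sketch: stacked ensemble of linear and non-linear base learners as a
      # stand-in for the super learner, plus top-decile risk concentration.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, StackingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier

      # X: Wave-1 risk factors, y: incident disorder between waves (assumed arrays)
      X, y = load_features_and_labels()            # hypothetical loader

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                stratify=y, random_state=0)

      ensemble = StackingClassifier(
          estimators=[
              ("logit", LogisticRegression(max_iter=1000)),
              ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
              ("knn", KNeighborsClassifier(n_neighbors=25)),
          ],
          final_estimator=LogisticRegression(max_iter=1000),
          cv=5,
      )
      ensemble.fit(X_tr, y_tr)
      risk = ensemble.predict_proba(X_te)[:, 1]
      print("AUC:", roc_auc_score(y_te, risk))

      # Share of incident cases captured by the top 10% of predicted risk
      y_arr = np.asarray(y_te)
      top = risk >= np.quantile(risk, 0.9)
      print("Share of incident cases in top decile:", y_arr[top].sum() / y_arr.sum())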