bims-librar Biomed News
on Biomedical librarianship
Issue of 2022‒12‒11
thirteen papers selected by
Thomas Krichel
Open Library Society


  1. J Can Health Libr Assoc. 2022 Dec;43(3): 93-103
      Introduction: Evidence-based practice is an important aspect of health science librarianship. However, good evidence-based practice can only occur if the body of evidence is also of adequate quality. By using bibliometric techniques to map the health science librarianship research field, one can better understand the properties of its evidence base. Methods: The Library Literature & Information Science Full Text database was used to generate a bibliography of publications pertaining to health librarianship limited to 2012-2022. Using Excel and Microsoft Power BI, a descriptive analysis was conducted. VOSviewer was used to create a subject term co-occurrence map.
    Results: The average number of publications per year was 207.3, trending downwards over 2012-2022. The most frequently assigned subject term was "survey". The average number of authors per paper was 2.5, trending upwards. The subject term co-occurrence map identified 5 clusters of keywords, which were interpreted as major themes found in the body of literature.
    Discussion: The identified themes were professional development, measuring the value output of librarian services, measuring the return on investment of library resources, improving the quality of LIS research, and outreach to other library and healthcare institutions. This depicts the health science librarianship research landscape as collaborative, concerned with finding ways of demonstrating value, and connected with other types of libraries and the public.
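    As a rough illustration of the co-occurrence mapping step (the study itself used VOSviewer, not custom code), the sketch below counts how often pairs of subject terms are assigned to the same record; the term lists shown are hypothetical.
```python
from itertools import combinations
from collections import Counter

# Hypothetical per-article subject-term sets (illustrative only; the study
# exported real records from Library Literature & Information Science Full Text).
articles = [
    {"survey", "professional development", "information literacy"},
    {"survey", "return on investment", "library services"},
    {"professional development", "outreach", "library services"},
]

# Count how often each pair of subject terms is assigned to the same article;
# VOSviewer-style maps are built from this kind of co-occurrence count.
cooccurrence = Counter()
for terms in articles:
    for a, b in combinations(sorted(terms), 2):
        cooccurrence[(a, b)] += 1

for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} <-> {b}: {n}")
```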
    DOI:  https://doi.org/10.29173/jchla29626
  2. Patterns (N Y). 2022 Dec 01. 100659
      A significant percentage of COVID-19 survivors experience ongoing multisystemic symptoms that often affect daily living, a condition known as Long Covid or post-acute sequelae of SARS-CoV-2 infection. However, identifying scientific articles relevant to Long Covid is challenging since there is no standardized or consensus terminology. We developed an iterative human-in-the-loop machine learning framework combining data programming with active learning into a robust ensemble model, demonstrating higher specificity and considerably higher sensitivity than other methods. Analysis of the Long Covid collection shows that (1) most Long Covid articles do not refer to Long Covid by any name, (2) when the condition is named, the name used most frequently in the literature is Long Covid, and (3) Long Covid is associated with disorders in a wide variety of body systems. The Long Covid collection is updated weekly and is searchable online at the LitCovid portal: https://www.ncbi.nlm.nih.gov/research/coronavirus/docsum?filters=e_condition.LongCovid.
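    A minimal sketch of the general pattern described here — noisy keyword-based labeling (data programming) feeding a classifier, with the least confident predictions queued for human review (active learning). This is not the authors' implementation; all texts, rules, and names below are illustrative.
```python
# Minimal weak-supervision + active-learning loop (illustrative only; the
# study's ensemble and LitCovid pipeline are more elaborate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def labeling_function(text: str) -> int:
    """Noisy rule: mark as Long Covid if any known synonym appears."""
    synonyms = ("long covid", "post-acute sequelae", "post-covid condition")
    return int(any(s in text.lower() for s in synonyms))

abstracts = [
    "Long COVID patients report persistent fatigue and brain fog.",
    "Post-acute sequelae of SARS-CoV-2 infection affect multiple organ systems.",
    "A randomized trial of a new influenza vaccine in adults.",
    "Persistent dyspnea months after acute infection without a named condition.",
]

weak_labels = np.array([labeling_function(t) for t in abstracts])
X = TfidfVectorizer().fit_transform(abstracts)

clf = LogisticRegression().fit(X, weak_labels)
probs = clf.predict_proba(X)[:, 1]

# Active-learning step: send the least confident documents to a human annotator.
uncertainty = np.abs(probs - 0.5)
to_review = np.argsort(uncertainty)[:2]
print("Queue for human review:", [abstracts[i][:40] for i in to_review])
```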
    Keywords:  COVID-19; Long Covid; active learning; data programming; machine learning; natural language processing; post-acute sequelae of SARS-CoV-2 infection; text classification; weak supervision
    DOI:  https://doi.org/10.1016/j.patter.2022.100659
  3. BMC Med Res Methodol. 2022 12 03. 22(1): 310
      BACKGROUND: Search filters are standardised sets of search terms, with validated performance, that are designed to retrieve studies with specific characteristics. A cost-utility analysis (CUA) is the preferred type of economic evaluation to underpin decision-making at the National Institute for Health and Care Excellence (NICE). Until now, when searching for economic evidence for NICE guidelines, we have used a broad set of health economic-related search terms, even when the reviewer's interest is confined to CUAs alone. METHODS: We developed search filters to retrieve CUAs from MEDLINE and Embase. Our aim was to achieve recall of 90% or better across both databases while reducing the overall yield compared with our existing broad economic filter. We used the relative recall method along with topic expert input to derive and validate 3 pairs of filters, assessed by their ability to identify a gold-standard set of CUAs that had been used in published NICE guidelines. We developed and validated MEDLINE and Embase filters in pairs (testing whether, when used together, they find target studies in at least 1 database), as this is how they are used in practice. We examined the proxy-precision of our new filters by comparing their overall yield with our previous approach using publications indexed in a randomly selected year (2010).
    RESULTS: All 3 filter-pairs exceeded our target recall and led to substantial improvements in search proxy-precision. Our paired 'sensitive' filters achieved 100% recall (95% CI 99.0 to 100%) in the validation set. Our paired 'precise' filters also had very good recall (97.6% [95% CI 95.4 to 98.9%]). We estimate that, compared with our previous search strategy, using the paired 'sensitive' filters would reduce reviewer screening burden by a factor of 5 and the 'precise' versions would do so by a factor of more than 20.
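    As an illustration of how paired-filter recall against a gold-standard set can be computed, the sketch below counts a target CUA as found if either database retrieves it and attaches a Wilson 95% confidence interval (the interval method and all inputs are assumptions, not taken from the paper).
```python
# Sketch: recall of a paired MEDLINE/Embase filter against a gold-standard set
# of CUAs, with a Wilson 95% confidence interval. All inputs are hypothetical.
from math import sqrt

def wilson_ci(hits: int, total: int, z: float = 1.96) -> tuple[float, float]:
    p = hits / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

gold_standard = {f"study{i}" for i in range(1, 101)}   # 100 target CUAs
found_medline = {f"study{i}" for i in range(1, 90)}    # retrieved in MEDLINE
found_embase = {f"study{i}" for i in range(80, 99)}    # retrieved in Embase

# A filter pair counts a study as found if either database retrieves it.
found_either = (found_medline | found_embase) & gold_standard
recall = len(found_either) / len(gold_standard)
lo, hi = wilson_ci(len(found_either), len(gold_standard))
print(f"Paired recall: {recall:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```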
    CONCLUSIONS: Each of the 3 paired cost-utility filters enables the identification of almost all CUAs from MEDLINE and Embase from the validation set, with substantial savings in screening workload compared to our previous search practice. We would encourage other researchers who regularly use multiple databases to consider validating search filters in combination as this will better reflect how they use databases in their everyday work.
    Keywords:  Cost-utility; Evidence selection; Paired analysis; Relative recall; Search filters
    DOI:  https://doi.org/10.1186/s12874-022-01796-2
  4. Database (Oxford). 2022 Dec 09. pii: baac104. [Epub ahead of print] 2022
      The scientific literature continues to grow at an ever-increasing rate. With thousands of new articles published every week, keeping up with newly published literature on a regular basis is challenging. A recommender system that improves the user experience in the online environment can be a solution to this problem. In the present study, we aimed to develop a web-based article recommender service, called Emati. Since the data are text-based by nature and we wanted the system to be independent of the number of users, a content-based approach was adopted. A supervised machine learning model generates the article recommendations. Two supervised learning approaches were implemented: a naïve Bayes model with a Term Frequency-Inverse Document Frequency (TF-IDF) vectorizer, and the state-of-the-art language model BERT (bidirectional encoder representations from transformers). In the first, documents are converted into TF-IDF-weighted features and fed into a classifier that distinguishes relevant articles from irrelevant ones. The multinomial naïve Bayes algorithm is used as the classifier because, along with the class label, it also gives the probability that the input belongs to that class. The second approach fine-tunes the pretrained BERT model for the text classification task. Emati provides a weekly updated list of article recommendations, sorted by probability score, and also sends new recommendations to users' email addresses each week. Additionally, Emati has a personalized search feature that searches the content of online services such as PubMed and arXiv and sorts the results using the user's classifier. Database URL: https://emati.biotec.tu-dresden.de.
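    A minimal sketch of the first approach described above — TF-IDF features fed to a multinomial naïve Bayes classifier whose class probabilities rank new articles. This is not Emati's code; the titles and labels are placeholders.
```python
# Sketch of a TF-IDF + multinomial naive Bayes relevance ranker, in the spirit
# of Emati's first approach (illustrative only; titles below are placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train_titles = [
    "Deep learning for protein structure prediction",
    "Graph neural networks for molecule generation",
    "Municipal budget report for fiscal year 2021",
    "Local weather summary and traffic updates",
]
train_labels = [1, 1, 0, 0]  # 1 = relevant to the user, 0 = irrelevant

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_titles)
clf = MultinomialNB().fit(X_train, train_labels)

new_titles = [
    "Transformer models for protein folding",
    "City council approves new budget",
]
scores = clf.predict_proba(vectorizer.transform(new_titles))[:, 1]

# Present recommendations sorted by probability of relevance, highest first.
for title, score in sorted(zip(new_titles, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```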
    DOI:  https://doi.org/10.1093/database/baac104
  5. Am J Perinatol. 2022 Dec 05.
      OBJECTIVE:  Internet-based patient education materials (PEMs) are often written above the sixth grade reading level recommended by the U.S. Department of Health and Human Services. In 2016, the U.S. Food and Drug Administration (FDA) released a warning statement against the use of general anesthetic drugs in children and pregnant women due to concerns about neurotoxicity. The aim of this study was to evaluate the readability, content, and quality of Internet-based PEMs on anesthesia in the pediatric population and neurotoxicity. STUDY DESIGN:  The websites of U.S. medical centers with pediatric anesthesiology fellowship programs were searched for PEMs pertaining to pediatric anesthesia and neurotoxicity. Readability was assessed. PEM content was evaluated using matrices specific to pediatric anesthesia and neurotoxicity. PEM quality was assessed with the Patient Education Material Assessment Tool for Print. A one-sample t-test was used to compare the readability of the PEMs to the recommended sixth grade reading level.
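    A sketch of the one-sample t-test described in the study design, run on hypothetical Flesch-Kincaid grade levels rather than the study's measurements:
```python
# One-sample t-test comparing PEM reading grade levels with the recommended
# sixth-grade level (hypothetical grade levels; not the study's data).
from scipy import stats

grade_levels = [9.8, 10.5, 8.7, 11.2, 9.1, 12.0, 10.3, 9.6]   # per-PEM grades
t_stat, p_value = stats.ttest_1samp(grade_levels, popmean=6.0)

print(f"mean grade = {sum(grade_levels)/len(grade_levels):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # tests whether the mean differs from 6.0
```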
    RESULTS:  We identified 27 PEMs pertaining to pediatric anesthesia and eight to neurotoxicity. Mean readability of all PEMs was greater than a sixth grade reading level (p <0.001). While only 13% of PEMs on anesthesia for pediatric patients mentioned the FDA warning, 100% of the neurotoxicity materials did. PEMs had good understandability (83%) and poor actionability (60%).
    CONCLUSION:  The readability, content, and quality of PEMs are poor and should be improved to help parents and guardians make informed decisions about their children's health care.
    KEY POINTS: · The FDA issued a warning statement against the use of general anesthetic drugs in children and pregnant women. · Readability, content, and quality of Internet-based patient education materials on the topic of neurotoxicity are poor. · Improving the readability, content, and quality of PEMs could aid parents in making important health care decisions.
    DOI:  https://doi.org/10.1055/s-0042-1754408
  6. J Oral Rehabil. 2022 Dec 07.
      BACKGROUND: Despite increasing scientific interest in the effectiveness of mandibular advancement devices (MAD) for the treatment of obstructive sleep apnea (OSA), laypeople lack knowledge about this treatment option. OBJECTIVES: To investigate the content, quality and readability of online information regarding MAD.
    METHODS: Google, Yahoo and Bing were searched for "sleep apnea", "mandibular advancement device" and "oral appliance". Websites were analyzed for content (multidisciplinary care team, qualified dentist, treatment contraindications and side effects), as well as for quality (DISCERN instrument, HONcode) and readability scores (Flesch Reading Ease, FRE, and Flesch-Kincaid Reading Grade, FKG).
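    The two readability measures named above have standard formulas; the sketch below computes both from word, sentence, and syllable counts, using a crude syllable heuristic rather than the validated tooling a study would use.
```python
# Standard Flesch Reading Ease (FRE) and Flesch-Kincaid Grade (FKG) formulas,
# with a crude vowel-group syllable counter (illustrative approximation only).
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences            # words per sentence
    spw = syllables / len(words)            # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkg = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkg

fre, fkg = readability("The dentist fits a small device. It moves the lower jaw forward.")
print(f"FRE = {fre:.1f}, FKG = {fkg:.1f}")
```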
    RESULTS: 155 websites were included: 53% from health professionals, 20% commercial, 17% academic, and 10% from non-health professionals. Content was incomplete, especially on commercial websites. 71.61% of websites failed to acknowledge treatment contraindications, approximately 40.00% did not mention side effects or the need for a multidisciplinary care team, and 22.58% did not address the need to consult a qualified dentist. Quality and reliability were poor. The mean DISCERN score was 39.93 (95% CI 37.90-41.96), with lower scores for commercial websites than for the others. Only nine websites displayed HONcode certification. Readability was fairly difficult, with a mean FRE score of 59.50 (95% CI 57.58-61.42) and a mean FKG level of 6.92 (95% CI 6.64-7.21).
    CONCLUSION: Health care professionals should be aware that currently available online information does not cover the most important aspects of MAD therapy and may be difficult for laypeople to understand. This could contribute to delays in appropriate OSA care and to unrealistic treatment expectations, increasing the risk of treatment discontinuation.
    Keywords:  internet; mandibular advancement devices; obstructive sleep apnea; online health; quality; readability
    DOI:  https://doi.org/10.1111/joor.13400
  7. J Med Internet Res. 2022 Dec 06. 24(12): e41219
      BACKGROUND: The internet provides general users with wide access to medical information. However, regulating and controlling the quality and reliability of the considerable volume of available data is challenging, thus generating concerns about the consequences of inaccurate health care-related documentation. Several tools have been proposed to increase the transparency and overall trustworthiness of medical information present on the web. OBJECTIVE: We aimed to analyze and compare the quality and reliability of information about percutaneous coronary intervention on English, German, Hungarian, Romanian, and Russian language websites.
    METHODS: Following a rigorous protocol, 125 websites were selected, 25 for each language subsample. The websites were assessed for their general characteristics, compliance with a set of eEurope 2002 credibility criteria, and quality of informational content (namely completeness and accuracy), based on a topic-specific benchmark. Completeness and accuracy were graded independently by 2 evaluators. Scores were reported on a scale from 0 to 10. The 5 language subsamples were compared regarding credibility, completeness, and accuracy. Correlations between credibility scores and completeness and accuracy scores were tested within each language subsample.
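    As a sketch of the within-subsample correlation test mentioned above, using hypothetical 0-10 scores and Spearman's rank correlation as one reasonable choice (the abstract does not state which coefficient was used):
```python
# Correlating credibility scores with completeness scores within one language
# subsample (hypothetical 0-10 scores; Spearman is an assumed choice).
from scipy import stats

credibility = [3.1, 4.5, 2.8, 5.9, 4.2, 3.7, 6.0, 2.5]
completeness = [2.0, 3.8, 2.6, 4.9, 3.1, 2.9, 5.2, 1.8]

rho, p_value = stats.spearmanr(credibility, completeness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```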
    RESULTS: The websites' compliance with credibility criteria was average at best with scores between 3.0 and 6.0. In terms of completeness and accuracy, the website subsets qualified as poor or average, with scores ranging from 2.4 to 4.6 and 3.6 to 5.3, respectively. English language websites scored significantly higher in all 3 aspects, followed by German and Hungarian language websites. Only German language websites showed a significant correlation between credibility and information quality.
    CONCLUSIONS: The quality of websites in English, German, Hungarian, Romanian, and Russian languages about percutaneous coronary intervention was rather inadequate and may raise concerns regarding their impact on informed decision-making. Using credibility criteria as indicators of information quality may not be warranted, as credibility scores were only exceptionally correlated with content quality. The study brings valuable descriptive data on the quality of web-based information regarding percutaneous coronary intervention in multiple languages and raises awareness about the need for responsible use of health-related web resources.
    Keywords:  consumer health informatics; content quality; credibility; health education; health information; informed decision-making; internet; medical information; percutaneous coronary intervention; quality; reliability
    DOI:  https://doi.org/10.2196/41219
  8. Artif Organs. 2022 Dec 07.
      BACKGROUND: As patients seek online health information to supplement their medical decision-making, the aim of this study was to assess the quality and readability of internet information on the left ventricular assist device (LVAD). METHODS: Three online search engines (Google, Bing, and Yahoo) were searched for "LVAD" and "Left ventricular assist device". Included websites were classified as academic, foundation/advocacy, hospital-affiliated, commercial, or unspecified. The quality of information was assessed using the JAMA benchmark criteria (0-4), the DISCERN tool (16-80), and the presence of Health On the Net code (HONcode) accreditation. Readability was assessed using the Flesch Reading Ease score.
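    The JAMA benchmark awards one point for each of four criteria (authorship, attribution, disclosure, currency); a minimal scorer might look like the sketch below, where the website record is hypothetical.
```python
# Scoring a website against the four JAMA benchmark criteria (0-4): one point
# each for authorship, attribution, disclosure, and currency.
JAMA_CRITERIA = ("authorship", "attribution", "disclosure", "currency")

def jama_score(site: dict) -> int:
    return sum(bool(site.get(criterion)) for criterion in JAMA_CRITERIA)

example_site = {               # hypothetical record
    "url": "https://example.org/lvad-overview",
    "authorship": True,        # authors and credentials listed
    "attribution": False,      # no references cited
    "disclosure": True,        # ownership/sponsorship disclosed
    "currency": False,         # no date of last update
}
print("JAMA benchmark score:", jama_score(example_site), "/ 4")
```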
    RESULTS: 38 unique websites were included. The average JAMA and DISCERN scores of all websites were 0.82 ± 1.11 and 52.45 ± 13.51, respectively. Academic sites had a significantly lower JAMA mean score than commercial (p<0.001) and unspecified (p<0.001) websites, as well as a significantly lower DISCERN mean score than commercial sites (p=0.002). HONcode certification was present in 6 (15%) websites analyzed, which had significantly higher JAMA (p<0.001) and DISCERN (p<0.016) mean scores than sites without HONcode certification. Readability was fairly difficult and at the level of high school students.
    CONCLUSIONS: The quality of online information on the LVAD is variable, and overall readability exceeds the recommended level for the public. Patients accessing online information on the LVAD should be referred to sites with HONcode accreditation. Academic institutions must provide higher quality online patient literature on LVADs.
    Keywords:  DISCERN; Health literacy; Heart failure; JAMA; Online information; Patient education; Readability; Ventricular assist device
    DOI:  https://doi.org/10.1111/aor.14479
  9. J Stroke Cerebrovasc Dis. 2022 Nov 30. pii: S1052-3057(22)00606-1. [Epub ahead of print] 32(2): 106914
      OBJECTIVES: Acute ischemic stroke is a leading cause of mortality and long-term disability. Mechanical thrombectomy can effectively treat large artery occlusions. This study aimed to evaluate the quality, reliability, and usefulness of videos on mechanical thrombectomy on YouTube using quantitative and qualitative analyses. MATERIALS AND METHODS: Video searches were performed by entering the following keywords into the YouTube search bar: "endovascular thrombectomy," "endovascular treatment of acute stroke," "mechanical thrombectomy," "stroke stent retriever," and "stent retriever thrombectomy." For each search term, the top 35 videos were reviewed. The videos were analyzed by two independent raters using the DISCERN and JAMA scoring systems. Qualitative and quantitative data were recorded for each video.
    RESULTS: A total of 150 videos were analyzed. The mean DISCERN score was 41.26, and the mean JAMA score was 1.42. Of the videos, 5.3% were categorized as very poor, 33.3% as poor, 44% as fair, 12% as good, and 5.3% as excellent. The videos that included qualitative features, such as clear information, symptomatology, diagnosis, treatment response, prognosis, etiology, epidemiology, diagrams, and radiological images, had significantly higher DISCERN and JAMA scores than their counterparts.
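    The very poor to excellent categories reported above are typically derived by banding total DISCERN scores (16-80); the cut-offs in the sketch are the commonly used ones and are assumed here, since the abstract does not state them.
```python
# Banding a total DISCERN score (16-80) into the quality categories reported
# in the results. Cut-offs are the commonly used ones, assumed for illustration.
def discern_category(total: int) -> str:
    if total <= 26:
        return "very poor"
    if total <= 38:
        return "poor"
    if total <= 50:
        return "fair"
    if total <= 62:
        return "good"
    return "excellent"   # 63-80

for score in (22, 35, 41, 55, 70):
    print(score, "->", discern_category(score))
```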
    CONCLUSION: Reliable and useful content is not available for individuals searching for information on mechanical thrombectomy on YouTube. In this paper, we highlighted key points for content creators to increase the quality and audience engagement statistics of their videos. Information provided in YouTube videos should be verified using more reliable sources.
    Keywords:  DISCERN; Mechanical thrombectomy; Quality; Stroke; YouTube
    DOI:  https://doi.org/10.1016/j.jstrokecerebrovasdis.2022.106914
  10. Epilepsy Behav. 2022 Dec 03. pii: S1525-5050(22)00466-8. [Epub ahead of print] 138: 109017
      OBJECTIVE: Cannabidiol (CBD) oil has long been used for the treatment of refractory epilepsy. In this study, we aimed to investigate the quality and reliability of YouTube videos pertaining to the use of CBD oil in the treatment of epilepsy. METHODS: A total of 100 videos were reviewed. Evaluation of the videos was performed by two experienced neurologists at the same time, but in different settings, in order to prevent bias. Each video's image type, content, length, upload date, daily view count, comment and like counts, uploader qualification, and DISCERN and GQS scores were recorded.
    RESULTS: The videos were uploaded by physicians (46%), health channels (33%), TV channels (7%), patients (2%), and other persons (12%). Across all videos, the mean DISCERN score was 3.71 ± 1.17 and the mean GQS score was 3.21 ± 1.05. On the DISCERN scale, videos uploaded by doctors scored 3.82 ± 1.02 and videos uploaded by non-doctors scored 3.07 ± 1.12 (p < 0.001). On the GQS scale, videos uploaded by doctors scored 3.51 ± 1.02 and videos uploaded by non-doctors scored 3.01 ± 1.17 (p < 0.001).
    CONCLUSION: Thirty-two (32%) videos were poor, 43 (43%) videos were moderate, and only 25 (25%) videos were good in terms of quality and reliability. YouTube videos related to health issues need to be audited strictly before they can become publicly accessible.
    Keywords:  CBD oil; Cannabidiol; Epilepsy; YouTube
    DOI:  https://doi.org/10.1016/j.yebeh.2022.109017
  11. Int J Dent Hyg. 2022 Dec 08.
      AIM: To evaluate the content and quality of YouTube™ videos about pit and fissure sealant application. METHODS: The keywords "fissure sealant" and "pit and fissure sealant" were used to search YouTube™, and the first 300 video results were evaluated. After applying the exclusion criteria, a final sample of 110 videos was obtained. These were analyzed in terms of the number of views, duration in minutes, the number of subscribers, the total numbers of "likes" and "dislikes," the number of comments, days elapsed since upload, the interaction index, and the viewing rate. The Global Quality Score (GQS) index was used as a second evaluation method, and videos were classified according to the quality of the information they contained as good, moderate, or poor.
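    The interaction index and viewing rate mentioned in the methods are usually defined as in the sketch below; the formulas are the ones commonly used in the YouTube quality literature and are assumed here, since the abstract does not spell them out. The example counts are hypothetical.
```python
# Commonly used YouTube engagement metrics (assumed definitions; the abstract
# does not give the exact formulas). Example counts are hypothetical.
def interaction_index(likes: int, dislikes: int, views: int) -> float:
    """(likes - dislikes) / total views x 100."""
    return (likes - dislikes) / views * 100

def viewing_rate(views: int, days_since_upload: int) -> float:
    """views / days since upload x 100."""
    return views / days_since_upload * 100

print(f"interaction index = {interaction_index(250, 10, 12000):.2f}")
print(f"viewing rate = {viewing_rate(12000, 400):.1f}")
```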
    RESULTS: Most of the videos were uploaded by dentists/specialists. Among them, 14 videos were of good quality, 63 of moderate quality, and 33 were poorly informative. The good-quality videos had statistically significantly higher interaction indices and viewing rates, and the majority of the videos had a GQS score of 2. The viewing rate was positively correlated with duration and with the numbers of views, "likes" and "dislikes," comments, and subscribers.
    CONCLUSION: There is considerable variability in the scientific accuracy and quality of health information on the Internet. While there are videos that provide sufficient health information, there are also videos that contain insufficient or even incorrect information. Dental care professionals should be aware of misinformation found on YouTube™ and ensure that patients always have access to accurate and reliable information.
    Keywords:  Global Quality Score; Pit and fissure sealants; YouTube™; internet
    DOI:  https://doi.org/10.1111/idh.12646
  12. J Fr Ophtalmol. 2022 Nov 30. pii: S0181-5512(22)00399-0. [Epub ahead of print]
      OBJECTIVE: This study analyzed the quality and reliability of YouTube videos as educational resources on trifocal intraocular lenses (IOLs). METHODS: This was a retrospective, cross-sectional, record-based study. An online YouTube search was performed using the terms "trifocal lens implants" and "trifocal IOL," and a total of 229 videos were recorded. Eighty-six videos that met the study criteria were included. All videos were evaluated with the DISCERN, Journal of the American Medical Association (JAMA), and Global Quality Score (GQS) instruments.
    RESULTS: The mean DISCERN, JAMA, and GQS scores were 37.79±11.92, 2.01±0.87, and 2.17±1.01, respectively. Of all the videos, 39 (45%) were uploaded by physicians and 47 (55%) by non-physicians. While video length was significantly greater in the physician group (P=0.02), video age was significantly greater in the non-physician group (P=0.02). However, the differences between the two groups in the other general characteristics and in the DISCERN, JAMA, and GQS scores were not significant.
    CONCLUSIONS: Our findings suggest that trifocal IOL-related YouTube™ videos are of low quality and reliability, and thus inadequate for patient information.
    Keywords:  DISCERN; Global Quality Score; Journal of the American Medical Association; Trifocal implant; Trifocal intraocular lens; YouTube
    DOI:  https://doi.org/10.1016/j.jfo.2022.05.029
  13. Int Ophthalmol. 2022 Dec 09.
      PURPOSE: We aimed to determine the utility, reliability, and quality of lid loading videos on YouTube, a video-sharing platform. METHODS: YouTube searches were made with the keywords 'Eyelid Loading,' 'Gold Weight Implantation,' and 'Lid Loading for Lagophthalmos' (without user login, with cleared search history, in an incognito tab). A total of 75 videos were recorded. Video length (seconds), number of views, upload source (doctor/health institution/medical channel), number of subscribers, number of likes, time since upload (days), video content (surgical/theoretical information), and type of narration (verbal narration/subtitles) were recorded. DISCERN, Journal of the American Medical Association (JAMA), and Global Quality Scores of the videos were evaluated and recorded by two experienced oculoplastic surgeons (KSC, HT).
    RESULTS: After applying the exclusion criteria, the remaining 46 videos were included in the study. The mean DISCERN score was 25.17 ± 6.88 (very poor quality), the mean JAMA score was 0.79 ± 0.63 (very poor quality), and the mean GQS was 2.84 ± 1.03 (medium quality). Thirty videos (65.2%) had verbal narration and 16 videos (34.8%) had subtitled narration. The DISCERN score and GQS were significantly higher for videos with verbal narration than for those with subtitled narration (p < 0.05). All three scores were positively correlated with each other. There was also a positive correlation between video length, number of subscribers, and DISCERN score.
    CONCLUSIONS: The videos about lid loading on YouTube are of poor reliability, accuracy, and educational quality. Video duration and narration type can be taken into account when choosing a video. Experts must review the content that is uploaded to websites like YouTube.
    Keywords:  DISCERN score; Global Quality Score; Journal of the American Medical Association Score; Lid loading; YouTube
    DOI:  https://doi.org/10.1007/s10792-022-02606-w